Test Report: KVM_Linux 19166

98210e04775e460720dbaecad9184210c804dd29:2024-07-01:35133

Test fail (8/341)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (146.89s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-735960 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-735960 -v=7 --alsologtostderr
E0701 12:20:33.803031  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-735960 -v=7 --alsologtostderr: (40.842561302s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-735960 --wait=true -v=7 --alsologtostderr
E0701 12:21:55.724202  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
ha_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-735960 --wait=true -v=7 --alsologtostderr: exit status 90 (1m44.497689423s)

-- stdout --
	* [ha-735960] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19166
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19166-630650/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-630650/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-735960" primary control-plane node in "ha-735960" cluster
	* Restarting existing kvm2 VM for "ha-735960" ...
	* Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
	* Enabled addons: 
	
	* Starting "ha-735960-m02" control-plane node in "ha-735960" cluster
	* Restarting existing kvm2 VM for "ha-735960-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.16

-- /stdout --
** stderr ** 
	I0701 12:21:13.996326  652196 out.go:291] Setting OutFile to fd 1 ...
	I0701 12:21:13.996600  652196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:21:13.996610  652196 out.go:304] Setting ErrFile to fd 2...
	I0701 12:21:13.996615  652196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:21:13.996825  652196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
	I0701 12:21:13.997417  652196 out.go:298] Setting JSON to false
	I0701 12:21:13.998463  652196 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7412,"bootTime":1719829062,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 12:21:13.998525  652196 start.go:139] virtualization: kvm guest
	I0701 12:21:14.000967  652196 out.go:177] * [ha-735960] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0701 12:21:14.002666  652196 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 12:21:14.002690  652196 notify.go:220] Checking for updates...
	I0701 12:21:14.005489  652196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:21:14.006983  652196 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:21:14.008350  652196 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-630650/.minikube
	I0701 12:21:14.009593  652196 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 12:21:14.011091  652196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:21:14.012857  652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:21:14.012999  652196 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 12:21:14.013468  652196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:21:14.013542  652196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:21:14.028581  652196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35775
	I0701 12:21:14.028967  652196 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:21:14.029528  652196 main.go:141] libmachine: Using API Version  1
	I0701 12:21:14.029551  652196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:21:14.029916  652196 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:21:14.030116  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:14.065038  652196 out.go:177] * Using the kvm2 driver based on existing profile
	I0701 12:21:14.066535  652196 start.go:297] selected driver: kvm2
	I0701 12:21:14.066551  652196 start.go:901] validating driver "kvm2" against &{Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:21:14.066723  652196 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:21:14.067041  652196 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:21:14.067114  652196 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19166-630650/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0701 12:21:14.082191  652196 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0701 12:21:14.082920  652196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:21:14.082959  652196 cni.go:84] Creating CNI manager for ""
	I0701 12:21:14.082966  652196 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0701 12:21:14.083026  652196 start.go:340] cluster config:
	{Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:21:14.083142  652196 iso.go:125] acquiring lock: {Name:mk5c70910f61bc270c83609c48670eaf9d7e0602 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:21:14.086358  652196 out.go:177] * Starting "ha-735960" primary control-plane node in "ha-735960" cluster
	I0701 12:21:14.087757  652196 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:21:14.087794  652196 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0701 12:21:14.087805  652196 cache.go:56] Caching tarball of preloaded images
	I0701 12:21:14.087882  652196 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:21:14.087892  652196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:21:14.088044  652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:21:14.088232  652196 start.go:360] acquireMachinesLock for ha-735960: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:21:14.088271  652196 start.go:364] duration metric: took 21.615µs to acquireMachinesLock for "ha-735960"
	I0701 12:21:14.088285  652196 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:21:14.088293  652196 fix.go:54] fixHost starting: 
	I0701 12:21:14.088547  652196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:21:14.088578  652196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:21:14.103070  652196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42935
	I0701 12:21:14.103508  652196 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:21:14.104025  652196 main.go:141] libmachine: Using API Version  1
	I0701 12:21:14.104050  652196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:21:14.104424  652196 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:21:14.104649  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:14.104829  652196 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:21:14.106608  652196 fix.go:112] recreateIfNeeded on ha-735960: state=Stopped err=<nil>
	I0701 12:21:14.106630  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	W0701 12:21:14.106790  652196 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:21:14.108833  652196 out.go:177] * Restarting existing kvm2 VM for "ha-735960" ...
	I0701 12:21:14.110060  652196 main.go:141] libmachine: (ha-735960) Calling .Start
	I0701 12:21:14.110234  652196 main.go:141] libmachine: (ha-735960) Ensuring networks are active...
	I0701 12:21:14.110976  652196 main.go:141] libmachine: (ha-735960) Ensuring network default is active
	I0701 12:21:14.111299  652196 main.go:141] libmachine: (ha-735960) Ensuring network mk-ha-735960 is active
	I0701 12:21:14.111665  652196 main.go:141] libmachine: (ha-735960) Getting domain xml...
	I0701 12:21:14.112420  652196 main.go:141] libmachine: (ha-735960) Creating domain...
	I0701 12:21:15.307133  652196 main.go:141] libmachine: (ha-735960) Waiting to get IP...
	I0701 12:21:15.308062  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:15.308526  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:15.308647  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:15.308493  652224 retry.go:31] will retry after 239.111405ms: waiting for machine to come up
	I0701 12:21:15.549211  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:15.549648  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:15.549679  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:15.549597  652224 retry.go:31] will retry after 248.256131ms: waiting for machine to come up
	I0701 12:21:15.799054  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:15.799481  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:15.799534  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:15.799422  652224 retry.go:31] will retry after 380.468685ms: waiting for machine to come up
	I0701 12:21:16.181969  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:16.182432  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:16.182634  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:16.182540  652224 retry.go:31] will retry after 592.847587ms: waiting for machine to come up
	I0701 12:21:16.777393  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:16.777837  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:16.777867  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:16.777790  652224 retry.go:31] will retry after 639.749416ms: waiting for machine to come up
	I0701 12:21:17.419540  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:17.419941  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:17.419965  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:17.419916  652224 retry.go:31] will retry after 891.768613ms: waiting for machine to come up
	I0701 12:21:18.312967  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:18.313455  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:18.313484  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:18.313399  652224 retry.go:31] will retry after 1.112048412s: waiting for machine to come up
	I0701 12:21:19.427190  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:19.427624  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:19.427655  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:19.427568  652224 retry.go:31] will retry after 1.150138437s: waiting for machine to come up
	I0701 12:21:20.579868  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:20.580291  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:20.580325  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:20.580216  652224 retry.go:31] will retry after 1.129763596s: waiting for machine to come up
	I0701 12:21:21.711416  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:21.711892  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:21.711924  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:21.711831  652224 retry.go:31] will retry after 2.143074349s: waiting for machine to come up
	I0701 12:21:23.858081  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:23.858617  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:23.858643  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:23.858578  652224 retry.go:31] will retry after 2.436757856s: waiting for machine to come up
	I0701 12:21:26.297727  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:26.298302  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:26.298352  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:26.298269  652224 retry.go:31] will retry after 2.609229165s: waiting for machine to come up
	I0701 12:21:28.911224  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:28.911698  652196 main.go:141] libmachine: (ha-735960) Found IP for machine: 192.168.39.16
	I0701 12:21:28.911722  652196 main.go:141] libmachine: (ha-735960) Reserving static IP address...
	I0701 12:21:28.911731  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has current primary IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:28.912401  652196 main.go:141] libmachine: (ha-735960) Reserved static IP address: 192.168.39.16
	I0701 12:21:28.912425  652196 main.go:141] libmachine: (ha-735960) Waiting for SSH to be available...
	I0701 12:21:28.912468  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "ha-735960", mac: "52:54:00:6c:20:7c", ip: "192.168.39.16"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:28.912492  652196 main.go:141] libmachine: (ha-735960) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960", mac: "52:54:00:6c:20:7c", ip: "192.168.39.16"}
	I0701 12:21:28.912507  652196 main.go:141] libmachine: (ha-735960) DBG | Getting to WaitForSSH function...
	I0701 12:21:28.914934  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:28.915448  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:28.915478  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:28.915627  652196 main.go:141] libmachine: (ha-735960) DBG | Using SSH client type: external
	I0701 12:21:28.915655  652196 main.go:141] libmachine: (ha-735960) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa (-rw-------)
	I0701 12:21:28.915680  652196 main.go:141] libmachine: (ha-735960) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:21:28.915698  652196 main.go:141] libmachine: (ha-735960) DBG | About to run SSH command:
	I0701 12:21:28.915730  652196 main.go:141] libmachine: (ha-735960) DBG | exit 0
	I0701 12:21:29.042314  652196 main.go:141] libmachine: (ha-735960) DBG | SSH cmd err, output: <nil>: 
	I0701 12:21:29.042747  652196 main.go:141] libmachine: (ha-735960) Calling .GetConfigRaw
	I0701 12:21:29.043414  652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:21:29.046291  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.046689  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.046714  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.046967  652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:21:29.047187  652196 machine.go:94] provisionDockerMachine start ...
	I0701 12:21:29.047211  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:29.047467  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.049524  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.049899  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.049924  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.050040  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.050240  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.050477  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.050669  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.050868  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:29.051073  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:29.051086  652196 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:21:29.166645  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:21:29.166687  652196 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:21:29.166983  652196 buildroot.go:166] provisioning hostname "ha-735960"
	I0701 12:21:29.167013  652196 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:21:29.167232  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.169829  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.170228  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.170260  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.170403  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.170603  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.170773  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.170913  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.171082  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:29.171259  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:29.171270  652196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960 && echo "ha-735960" | sudo tee /etc/hostname
	I0701 12:21:29.295697  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960
	
	I0701 12:21:29.295728  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.298625  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.299014  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.299041  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.299233  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.299434  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.299641  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.299795  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.299954  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:29.300149  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:29.300171  652196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:21:29.418489  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 12:21:29.418522  652196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:21:29.418577  652196 buildroot.go:174] setting up certificates
	I0701 12:21:29.418593  652196 provision.go:84] configureAuth start
	I0701 12:21:29.418612  652196 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:21:29.418889  652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:21:29.421815  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.422238  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.422275  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.422477  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.424787  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.425187  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.425216  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.425427  652196 provision.go:143] copyHostCerts
	I0701 12:21:29.425466  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:21:29.425530  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:21:29.425542  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:21:29.425624  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:21:29.425732  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:21:29.425753  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:21:29.425758  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:21:29.425798  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:21:29.425856  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:21:29.425872  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:21:29.425877  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:21:29.425897  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:21:29.425958  652196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960 san=[127.0.0.1 192.168.39.16 ha-735960 localhost minikube]
	I0701 12:21:29.592360  652196 provision.go:177] copyRemoteCerts
	I0701 12:21:29.592437  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:21:29.592463  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.595489  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.595884  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.595908  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.596131  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.596356  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.596515  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.596646  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:21:29.684124  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:21:29.684214  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0701 12:21:29.707185  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:21:29.707254  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 12:21:29.729605  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:21:29.729687  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:21:29.751505  652196 provision.go:87] duration metric: took 332.894756ms to configureAuth
	I0701 12:21:29.751536  652196 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:21:29.751802  652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:21:29.751834  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:29.752179  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.754903  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.755331  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.755367  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.755494  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.755709  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.755868  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.756016  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.756168  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:29.756341  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:29.756351  652196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:21:29.867557  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:21:29.867582  652196 buildroot.go:70] root file system type: tmpfs
	I0701 12:21:29.867738  652196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:21:29.867768  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.870702  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.871111  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.871152  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.871294  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.871532  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.871806  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.871989  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.872177  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:29.872347  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:29.872410  652196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:21:29.995623  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:21:29.995671  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.998574  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.998969  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.999001  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.999184  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.999403  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.999598  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.999772  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.999916  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:30.000093  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:30.000109  652196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:21:31.849411  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 12:21:31.849452  652196 machine.go:97] duration metric: took 2.802248138s to provisionDockerMachine
	I0701 12:21:31.849473  652196 start.go:293] postStartSetup for "ha-735960" (driver="kvm2")
	I0701 12:21:31.849487  652196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:21:31.849508  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:31.849934  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:21:31.849982  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:31.853029  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:31.853464  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:31.853494  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:31.853656  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:31.853877  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:31.854065  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:31.854242  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:21:31.948096  652196 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:21:31.952493  652196 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:21:31.952522  652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:21:31.952580  652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:21:31.952654  652196 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:21:31.952664  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:21:31.952750  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:21:31.962034  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:21:31.985898  652196 start.go:296] duration metric: took 136.407484ms for postStartSetup
	I0701 12:21:31.985953  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:31.986287  652196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:21:31.986316  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:31.988934  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:31.989328  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:31.989359  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:31.989497  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:31.989724  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:31.989863  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:31.990038  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:21:32.076710  652196 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:21:32.076807  652196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:21:32.133792  652196 fix.go:56] duration metric: took 18.045488816s for fixHost
	I0701 12:21:32.133863  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:32.136703  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.137078  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:32.137110  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.137321  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:32.137591  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:32.137793  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:32.137963  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:32.138201  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:32.138518  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:32.138541  652196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0701 12:21:32.254973  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836492.215186729
	
	I0701 12:21:32.255001  652196 fix.go:216] guest clock: 1719836492.215186729
	I0701 12:21:32.255007  652196 fix.go:229] Guest: 2024-07-01 12:21:32.215186729 +0000 UTC Remote: 2024-07-01 12:21:32.133836118 +0000 UTC m=+18.172225533 (delta=81.350611ms)
	I0701 12:21:32.255027  652196 fix.go:200] guest clock delta is within tolerance: 81.350611ms
	I0701 12:21:32.255032  652196 start.go:83] releasing machines lock for "ha-735960", held for 18.166751927s
	I0701 12:21:32.255050  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:32.255338  652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:21:32.258091  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.258459  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:32.258481  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.258679  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:32.259224  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:32.259383  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:32.259520  652196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:21:32.259564  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:32.259693  652196 ssh_runner.go:195] Run: cat /version.json
	I0701 12:21:32.259718  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:32.262127  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.262481  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:32.262518  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.262538  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.262653  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:32.262845  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:32.263031  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:32.263054  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:32.263074  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.263215  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:21:32.263229  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:32.263398  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:32.263547  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:32.263699  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:21:32.343012  652196 ssh_runner.go:195] Run: systemctl --version
	I0701 12:21:32.428409  652196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0701 12:21:32.433742  652196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:21:32.433815  652196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:21:32.449052  652196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:21:32.449087  652196 start.go:494] detecting cgroup driver to use...
	I0701 12:21:32.449338  652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:21:32.471651  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:21:32.481832  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:21:32.491470  652196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:21:32.491548  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:21:32.501229  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:21:32.511119  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:21:32.520826  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:21:32.530559  652196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:21:32.542109  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:21:32.551821  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:21:32.561403  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:21:32.571068  652196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:21:32.579813  652196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:21:32.588595  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:32.705377  652196 ssh_runner.go:195] Run: sudo systemctl restart containerd
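Note: the sed run above (12:21:32.471–32.588) rewrites /etc/containerd/config.toml in place — the pause image pinned to registry.k8s.io/pause:3.9, SystemdCgroup forced to false to match the chosen cgroupfs driver, legacy runtime names migrated to io.containerd.runc.v2 — and then bounces the daemon. Every expression is idempotent, so a re-run is harmless. Schematically (the `run` helper and trimmed edit list are illustrative):

    package provision

    // tuneContainerd applies in-place sed edits to the containerd config and
    // restarts the daemon. run is assumed to hand its argument to `sh -c` on
    // the guest; the list here is a subset of what the log shows.
    func tuneContainerd(run func(cmd string) error) error {
        edits := []string{
            `sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml`,
            `sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
            `sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
        }
        for _, e := range edits {
            if err := run(e); err != nil {
                return err
            }
        }
        return run("sudo systemctl restart containerd")
    }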
	I0701 12:21:32.724169  652196 start.go:494] detecting cgroup driver to use...
	I0701 12:21:32.724285  652196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:21:32.739050  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:21:32.753169  652196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:21:32.769805  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:21:32.783750  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:21:32.797509  652196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:21:32.821510  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:21:32.835901  652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:21:32.854192  652196 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:21:32.858039  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:21:32.867652  652196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:21:32.884216  652196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:21:33.001636  652196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:21:33.121229  652196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:21:33.121419  652196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 12:21:33.138482  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:33.262395  652196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:21:35.714549  652196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.452099351s)
	I0701 12:21:35.714642  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 12:21:35.727946  652196 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0701 12:21:35.744089  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:21:35.757426  652196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:21:35.868089  652196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:21:35.989857  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:36.121343  652196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:21:36.138520  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:21:36.152026  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:36.271312  652196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0701 12:21:36.351567  652196 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:21:36.351668  652196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
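Note: "Will wait 60s for socket path" is a plain existence poll: stat the socket until it appears or the deadline passes. A sketch (the 500ms interval is an assumption):

    package provision

    import (
        "fmt"
        "time"
    )

    // waitForPath polls the guest until stat on the path succeeds.
    func waitForPath(run func(cmd string) error, path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if run("stat "+path) == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond) // poll interval assumed
        }
        return fmt.Errorf("%s did not appear within %s", path, timeout)
    }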
	I0701 12:21:36.357143  652196 start.go:562] Will wait 60s for crictl version
	I0701 12:21:36.357212  652196 ssh_runner.go:195] Run: which crictl
	I0701 12:21:36.361384  652196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:21:36.400372  652196 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.1
	RuntimeApiVersion:  v1
	I0701 12:21:36.400446  652196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:21:36.427941  652196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:21:36.456620  652196 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
	I0701 12:21:36.456687  652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:21:36.459384  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:36.459752  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:36.459781  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:36.459970  652196 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0701 12:21:36.463956  652196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
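Note: the /etc/hosts one-liner is the idempotent pin for host.minikube.internal: grep -v drops any stale entry, the fresh mapping is appended, and sudo cp installs the temp file (a bare redirect would run unprivileged). The same trick recurs at 12:21:36.640 for control-plane.minikube.internal. As Go, roughly (`run` is assumed to execute its argument via /bin/bash -c):

    package provision

    import "fmt"

    // pinHostsEntry rewrites /etc/hosts so exactly one line maps name to ip.
    // $$ expands to the remote shell's PID, so concurrent provisioners never
    // clobber each other's temp file.
    func pinHostsEntry(run func(cmd string) error, ip, name string) error {
        script := fmt.Sprintf(
            "{ grep -v $'\\t%s$' /etc/hosts; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts",
            name, ip, name)
        return run(script)
    }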
	I0701 12:21:36.476676  652196 kubeadm.go:877] updating cluster {Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:fa
lse freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0701 12:21:36.476851  652196 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:21:36.476914  652196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:21:36.493466  652196 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0701 12:21:36.493530  652196 docker.go:615] Images already preloaded, skipping extraction
	I0701 12:21:36.493620  652196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:21:36.510908  652196 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0701 12:21:36.510939  652196 cache_images.go:84] Images are preloaded, skipping loading
	I0701 12:21:36.510952  652196 kubeadm.go:928] updating node { 192.168.39.16 8443 v1.30.2 docker true true} ...
	I0701 12:21:36.511079  652196 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0701 12:21:36.511139  652196 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0701 12:21:36.536408  652196 cni.go:84] Creating CNI manager for ""
	I0701 12:21:36.536430  652196 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0701 12:21:36.536441  652196 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0701 12:21:36.536470  652196 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.16 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-735960 NodeName:ha-735960 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0701 12:21:36.536633  652196 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-735960"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0701 12:21:36.536656  652196 kube-vip.go:115] generating kube-vip config ...
	I0701 12:21:36.536698  652196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0701 12:21:36.551906  652196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0701 12:21:36.552024  652196 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable

	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0701 12:21:36.552078  652196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 12:21:36.561989  652196 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:21:36.562059  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0701 12:21:36.571281  652196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0701 12:21:36.587480  652196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:21:36.603596  652196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0701 12:21:36.621063  652196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
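Note: "scp memory -->" means the systemd drop-ins and manifests above are rendered in RAM and streamed straight to the guest rather than staged in local files first. One way to get that effect with golang.org/x/crypto/ssh — piping into `sudo tee` is this sketch's choice of transport, not necessarily minikube's:

    package provision

    import (
        "bytes"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    // scpMemory streams an in-memory blob to a root-owned path on the guest
    // over an already-established SSH connection, with no temp file.
    func scpMemory(client *ssh.Client, data []byte, dst string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        // tee also echoes the stream to stdout; discard that copy.
        return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
    }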
	I0701 12:21:36.637192  652196 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0701 12:21:36.640909  652196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:21:36.652690  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:36.768142  652196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:21:36.786625  652196 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960 for IP: 192.168.39.16
	I0701 12:21:36.786655  652196 certs.go:194] generating shared ca certs ...
	I0701 12:21:36.786676  652196 certs.go:226] acquiring lock for ca certs: {Name:mk34e166bfd069e523b2325e14d1812c523bff53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:36.786854  652196 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key
	I0701 12:21:36.786904  652196 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key
	I0701 12:21:36.786915  652196 certs.go:256] generating profile certs ...
	I0701 12:21:36.787017  652196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key
	I0701 12:21:36.787046  652196 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af
	I0701 12:21:36.787059  652196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.16 192.168.39.86 192.168.39.97 192.168.39.254]
	I0701 12:21:37.059263  652196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af ...
	I0701 12:21:37.059305  652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af: {Name:mk1be9dc4667506ac6fdcfb1e313edd1292fe7f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:37.059483  652196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af ...
	I0701 12:21:37.059496  652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af: {Name:mkf9220e489bd04f035dab270c790bb3448ca6be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:37.059596  652196 certs.go:381] copying /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af -> /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt
	I0701 12:21:37.059809  652196 certs.go:385] copying /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af -> /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key
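Note: the apiserver certificate is minted with every address a client might dial in its IP SANs — the in-cluster service VIP 10.96.0.1, loopback, the three control-plane IPs, and the kube-vip VIP 192.168.39.254 — and the hash-suffixed working copy (.5c21f4af) is then promoted to apiserver.crt. Issuing such a cert with crypto/x509, schematically (the key type and subject are this sketch's choices; the 26280h validity mirrors CertExpiration in the profile above):

    package provision

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // signServingCert issues a serving cert whose IP SANs cover every address
    // clients may use to reach the apiserver; ca/caKey are the cluster CA pair.
    func signServingCert(ca *x509.Certificate, caKey *ecdsa.PrivateKey, ips []string) ([]byte, *ecdsa.PrivateKey, error) {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // 3 years
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        for _, ip := range ips {
            tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(ip))
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }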
	I0701 12:21:37.059969  652196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key
	I0701 12:21:37.059987  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 12:21:37.060000  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 12:21:37.060014  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 12:21:37.060026  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 12:21:37.060038  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0701 12:21:37.060054  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0701 12:21:37.060066  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0701 12:21:37.060077  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0701 12:21:37.060165  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem (1338 bytes)
	W0701 12:21:37.060197  652196 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854_empty.pem, impossibly tiny 0 bytes
	I0701 12:21:37.060207  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 12:21:37.060228  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:21:37.060248  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:21:37.060270  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem (1675 bytes)
	I0701 12:21:37.060305  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:21:37.060331  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem -> /usr/share/ca-certificates/637854.pem
	I0701 12:21:37.060347  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /usr/share/ca-certificates/6378542.pem
	I0701 12:21:37.060359  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:21:37.061045  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:21:37.111708  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:21:37.168649  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:21:37.204675  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:21:37.241167  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0701 12:21:37.265225  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 12:21:37.288613  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 12:21:37.312645  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 12:21:37.337494  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem --> /usr/share/ca-certificates/637854.pem (1338 bytes)
	I0701 12:21:37.361044  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /usr/share/ca-certificates/6378542.pem (1708 bytes)
	I0701 12:21:37.385424  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:21:37.409054  652196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 12:21:37.426602  652196 ssh_runner.go:195] Run: openssl version
	I0701 12:21:37.432129  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:21:37.442695  652196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:21:37.447331  652196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:21:37.447415  652196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:21:37.453215  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 12:21:37.464086  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/637854.pem && ln -fs /usr/share/ca-certificates/637854.pem /etc/ssl/certs/637854.pem"
	I0701 12:21:37.474527  652196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/637854.pem
	I0701 12:21:37.479057  652196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 12:11 /usr/share/ca-certificates/637854.pem
	I0701 12:21:37.479123  652196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/637854.pem
	I0701 12:21:37.484641  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/637854.pem /etc/ssl/certs/51391683.0"
	I0701 12:21:37.495175  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6378542.pem && ln -fs /usr/share/ca-certificates/6378542.pem /etc/ssl/certs/6378542.pem"
	I0701 12:21:37.505961  652196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6378542.pem
	I0701 12:21:37.510286  652196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 12:11 /usr/share/ca-certificates/6378542.pem
	I0701 12:21:37.510365  652196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem
	I0701 12:21:37.516124  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6378542.pem /etc/ssl/certs/3ec20f2e.0"
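Note: the `openssl x509 -hash` calls compute OpenSSL's subject-name hash, and each CA is then linked into /etc/ssl/certs as <hash>.0 (b5213941.0, 51391683.0, 3ec20f2e.0 above) — the layout OpenSSL's default verifier searches. Equivalent logic, run locally here rather than through the SSH runner:

    package provision

    import (
        "os"
        "os/exec"
        "strings"
    )

    // installCA links a CA cert into /etc/ssl/certs under its subject hash,
    // which is how OpenSSL's default verifier locates trust anchors.
    // Needs root to write into /etc/ssl/certs.
    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        if _, err := os.Lstat(link); err == nil {
            return nil // already installed
        }
        return os.Symlink(pemPath, link)
    }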
	I0701 12:21:37.527154  652196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 12:21:37.532024  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 12:21:37.538145  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 12:21:37.544280  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 12:21:37.550448  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 12:21:37.556356  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 12:21:37.562174  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
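Note: each `-checkend 86400` probe asks whether the cert will still be valid 24 hours from now; a non-zero exit would force regeneration before kubeadm runs. The crypto/x509 equivalent of that check:

    package provision

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires inside d —
    // what `openssl x509 -checkend` answers with its exit status.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, errors.New("no PEM block in " + path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }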
	I0701 12:21:37.568144  652196 kubeadm.go:391] StartCluster: {Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false
freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:21:37.568362  652196 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0701 12:21:37.586457  652196 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0701 12:21:37.596129  652196 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0701 12:21:37.596158  652196 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0701 12:21:37.596164  652196 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0701 12:21:37.596237  652196 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 12:21:37.605715  652196 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 12:21:37.606193  652196 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-735960" does not appear in /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:21:37.606354  652196 kubeconfig.go:62] /home/jenkins/minikube-integration/19166-630650/kubeconfig needs updating (will repair): [kubeconfig missing "ha-735960" cluster setting kubeconfig missing "ha-735960" context setting]
	I0701 12:21:37.606708  652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/kubeconfig: {Name:mke3ef9d019eff4edd273b00c416fd77ed009242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:37.607135  652196 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:21:37.607365  652196 kapi.go:59] client config for ha-735960: &rest.Config{Host:"https://192.168.39.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key", CAFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(ni
l)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfbb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0701 12:21:37.607752  652196 cert_rotation.go:137] Starting client certificate rotation controller
	I0701 12:21:37.608047  652196 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 12:21:37.617685  652196 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.16
	I0701 12:21:37.617715  652196 kubeadm.go:591] duration metric: took 21.544408ms to restartPrimaryControlPlane
	I0701 12:21:37.617725  652196 kubeadm.go:393] duration metric: took 49.593354ms to StartCluster
	I0701 12:21:37.617748  652196 settings.go:142] acquiring lock: {Name:mk6f7c85ea77a73ff0ac851454721f2e6e309153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:37.617834  652196 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:21:37.618535  652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/kubeconfig: {Name:mke3ef9d019eff4edd273b00c416fd77ed009242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
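Note: "kubeconfig needs updating (will repair)" at 12:21:37.606 means the stop/start cycle left the kubeconfig without the "ha-735960" cluster and context entries, so both are re-added before anything dials the apiserver. With client-go the repair amounts to (field values illustrative):

    package provision

    import (
        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig re-adds a missing cluster/context pair for a profile
    // and points the current context at it.
    func repairKubeconfig(path, name, server, caFile string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        cfg.Clusters[name] = &api.Cluster{Server: server, CertificateAuthority: caFile}
        cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
        cfg.CurrentContext = name
        return clientcmd.WriteToFile(*cfg, path)
    }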
	I0701 12:21:37.618754  652196 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:21:37.618777  652196 start.go:240] waiting for startup goroutines ...
	I0701 12:21:37.618792  652196 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0701 12:21:37.619028  652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:21:37.621683  652196 out.go:177] * Enabled addons: 
	I0701 12:21:37.622979  652196 addons.go:510] duration metric: took 4.192015ms for enable addons: enabled=[]
	I0701 12:21:37.623011  652196 start.go:245] waiting for cluster config update ...
	I0701 12:21:37.623019  652196 start.go:254] writing updated cluster config ...
	I0701 12:21:37.624600  652196 out.go:177] 
	I0701 12:21:37.626023  652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:21:37.626124  652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:21:37.627745  652196 out.go:177] * Starting "ha-735960-m02" control-plane node in "ha-735960" cluster
	I0701 12:21:37.628946  652196 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:21:37.628969  652196 cache.go:56] Caching tarball of preloaded images
	I0701 12:21:37.629060  652196 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:21:37.629072  652196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:21:37.629161  652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:21:37.629353  652196 start.go:360] acquireMachinesLock for ha-735960-m02: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:21:37.629411  652196 start.go:364] duration metric: took 31.79µs to acquireMachinesLock for "ha-735960-m02"
	I0701 12:21:37.629427  652196 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:21:37.629440  652196 fix.go:54] fixHost starting: m02
	I0701 12:21:37.629698  652196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:21:37.629747  652196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:21:37.644981  652196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I0701 12:21:37.645473  652196 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:21:37.645947  652196 main.go:141] libmachine: Using API Version  1
	I0701 12:21:37.645969  652196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:21:37.646284  652196 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:21:37.646523  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:37.646646  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetState
	I0701 12:21:37.648195  652196 fix.go:112] recreateIfNeeded on ha-735960-m02: state=Stopped err=<nil>
	I0701 12:21:37.648228  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	W0701 12:21:37.648406  652196 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:21:37.650489  652196 out.go:177] * Restarting existing kvm2 VM for "ha-735960-m02" ...
	I0701 12:21:37.651975  652196 main.go:141] libmachine: (ha-735960-m02) Calling .Start
	I0701 12:21:37.652186  652196 main.go:141] libmachine: (ha-735960-m02) Ensuring networks are active...
	I0701 12:21:37.652916  652196 main.go:141] libmachine: (ha-735960-m02) Ensuring network default is active
	I0701 12:21:37.653282  652196 main.go:141] libmachine: (ha-735960-m02) Ensuring network mk-ha-735960 is active
	I0701 12:21:37.653613  652196 main.go:141] libmachine: (ha-735960-m02) Getting domain xml...
	I0701 12:21:37.654254  652196 main.go:141] libmachine: (ha-735960-m02) Creating domain...
	I0701 12:21:38.852369  652196 main.go:141] libmachine: (ha-735960-m02) Waiting to get IP...
	I0701 12:21:38.853358  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:38.853762  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:38.853832  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:38.853747  652384 retry.go:31] will retry after 295.798088ms: waiting for machine to come up
	I0701 12:21:39.151332  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:39.151886  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:39.151912  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:39.151845  652384 retry.go:31] will retry after 255.18729ms: waiting for machine to come up
	I0701 12:21:39.408310  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:39.408739  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:39.408792  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:39.408689  652384 retry.go:31] will retry after 457.740061ms: waiting for machine to come up
	I0701 12:21:39.868295  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:39.868702  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:39.868736  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:39.868629  652384 retry.go:31] will retry after 548.674851ms: waiting for machine to come up
	I0701 12:21:40.419597  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:40.420069  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:40.420100  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:40.420009  652384 retry.go:31] will retry after 755.113146ms: waiting for machine to come up
	I0701 12:21:41.176960  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:41.177380  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:41.177429  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:41.177309  652384 retry.go:31] will retry after 739.288718ms: waiting for machine to come up
	I0701 12:21:41.918305  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:41.918853  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:41.918884  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:41.918789  652384 retry.go:31] will retry after 722.041404ms: waiting for machine to come up
	I0701 12:21:42.642704  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:42.643188  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:42.643219  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:42.643113  652384 retry.go:31] will retry after 1.139279839s: waiting for machine to come up
	I0701 12:21:43.784719  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:43.785159  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:43.785193  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:43.785114  652384 retry.go:31] will retry after 1.276779849s: waiting for machine to come up
	I0701 12:21:45.063522  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:45.064026  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:45.064058  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:45.063969  652384 retry.go:31] will retry after 2.284492799s: waiting for machine to come up
	I0701 12:21:47.351530  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:47.352076  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:47.352113  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:47.351988  652384 retry.go:31] will retry after 2.171521184s: waiting for machine to come up
	I0701 12:21:49.526162  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:49.526566  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:49.526590  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:49.526523  652384 retry.go:31] will retry after 3.533181759s: waiting for machine to come up
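Note: the retry.go lines above are a jittered, roughly doubling backoff (296ms, 255ms, 458ms, … 3.5s) around a libvirt DHCP-lease lookup, bounded by an overall deadline. The shape of the loop, with made-up constants:

    package provision

    import (
        "errors"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until the domain's MAC shows up in the network's
    // DHCP leases, sleeping a jittered, growing interval between attempts.
    func waitForIP(lookup func() (ip string, ok bool), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, ok := lookup(); ok {
                return ip, nil
            }
            // sleep backoff plus up to 100% jitter, then grow it, capped at 4s
            time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff))))
            if backoff < 4*time.Second {
                backoff *= 2
            }
        }
        return "", errors.New("timed out waiting for machine to come up")
    }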
	I0701 12:21:53.061482  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.062025  652196 main.go:141] libmachine: (ha-735960-m02) Found IP for machine: 192.168.39.86
	I0701 12:21:53.062048  652196 main.go:141] libmachine: (ha-735960-m02) Reserving static IP address...
	I0701 12:21:53.062060  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has current primary IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.062473  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "ha-735960-m02", mac: "52:54:00:0b:2f:ce", ip: "192.168.39.86"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.062504  652196 main.go:141] libmachine: (ha-735960-m02) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960-m02", mac: "52:54:00:0b:2f:ce", ip: "192.168.39.86"}
	I0701 12:21:53.062534  652196 main.go:141] libmachine: (ha-735960-m02) Reserved static IP address: 192.168.39.86
	I0701 12:21:53.062554  652196 main.go:141] libmachine: (ha-735960-m02) Waiting for SSH to be available...
	I0701 12:21:53.062566  652196 main.go:141] libmachine: (ha-735960-m02) DBG | Getting to WaitForSSH function...
	I0701 12:21:53.064461  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.064796  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.064828  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.064893  652196 main.go:141] libmachine: (ha-735960-m02) DBG | Using SSH client type: external
	I0701 12:21:53.064938  652196 main.go:141] libmachine: (ha-735960-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa (-rw-------)
	I0701 12:21:53.064965  652196 main.go:141] libmachine: (ha-735960-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:21:53.064981  652196 main.go:141] libmachine: (ha-735960-m02) DBG | About to run SSH command:
	I0701 12:21:53.065000  652196 main.go:141] libmachine: (ha-735960-m02) DBG | exit 0
	I0701 12:21:53.190266  652196 main.go:141] libmachine: (ha-735960-m02) DBG | SSH cmd err, output: <nil>: 
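
A minimal sketch of the wait-for-SSH probe the log shows above: libmachine keeps running `exit 0` over SSH until the guest's sshd answers. HOST and KEY below are placeholders standing in for the machine's configured address and private key, not values to copy from this run.

    # Sketch only: retry 'exit 0' until SSH is reachable (placeholder HOST/KEY).
    HOST=docker@192.168.39.86
    KEY=/path/to/machines/ha-735960-m02/id_rsa
    until ssh -i "$KEY" -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
              -o UserKnownHostsFile=/dev/null "$HOST" 'exit 0' 2>/dev/null; do
      sleep 2   # back off and retry, mirroring the retry.go waits earlier in the log
    done
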
	I0701 12:21:53.190636  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetConfigRaw
	I0701 12:21:53.191272  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:21:53.193658  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.193994  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.194027  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.194274  652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:21:53.194544  652196 machine.go:94] provisionDockerMachine start ...
	I0701 12:21:53.194562  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:53.194814  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.196894  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.197262  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.197291  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.197414  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:53.197654  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.197829  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.198021  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:53.198185  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:53.198432  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:53.198448  652196 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:21:53.306480  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:21:53.306526  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:21:53.306839  652196 buildroot.go:166] provisioning hostname "ha-735960-m02"
	I0701 12:21:53.306870  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:21:53.307063  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.309645  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.310086  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.310116  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.310307  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:53.310514  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.310689  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.310820  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:53.310997  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:53.311210  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:53.311225  652196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960-m02 && echo "ha-735960-m02" | sudo tee /etc/hostname
	I0701 12:21:53.434956  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960-m02
	
	I0701 12:21:53.434992  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.437612  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.438016  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.438040  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.438190  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:53.438418  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.438601  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.438768  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:53.438926  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:53.439106  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:53.439128  652196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:21:53.559115  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 12:21:53.559146  652196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:21:53.559163  652196 buildroot.go:174] setting up certificates
	I0701 12:21:53.559174  652196 provision.go:84] configureAuth start
	I0701 12:21:53.559186  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:21:53.559514  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:21:53.562119  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.562516  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.562550  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.562753  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.564741  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.565063  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.565082  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.565233  652196 provision.go:143] copyHostCerts
	I0701 12:21:53.565266  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:21:53.565309  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:21:53.565318  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:21:53.565379  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:21:53.565450  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:21:53.565468  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:21:53.565474  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:21:53.565492  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:21:53.565533  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:21:53.565549  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:21:53.565555  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:21:53.565570  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:21:53.565618  652196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960-m02 san=[127.0.0.1 192.168.39.86 ha-735960-m02 localhost minikube]
	I0701 12:21:53.749696  652196 provision.go:177] copyRemoteCerts
	I0701 12:21:53.749755  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:21:53.749780  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.752460  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.752780  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.752813  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.752952  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:53.753159  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.753385  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:53.753547  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:21:53.835990  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:21:53.836060  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:21:53.858665  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:21:53.858753  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0701 12:21:53.880281  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:21:53.880367  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 12:21:53.902677  652196 provision.go:87] duration metric: took 343.48703ms to configureAuth
	I0701 12:21:53.902709  652196 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:21:53.903020  652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:21:53.903053  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:53.903351  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.905929  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.906189  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.906216  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.906438  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:53.906667  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.906826  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.906966  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:53.907119  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:53.907282  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:53.907294  652196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:21:54.019474  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:21:54.019501  652196 buildroot.go:70] root file system type: tmpfs
	I0701 12:21:54.019656  652196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:21:54.019681  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:54.022816  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:54.023184  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:54.023208  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:54.023371  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:54.023579  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:54.023787  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:54.023946  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:54.024146  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:54.024319  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:54.024384  652196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.16"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:21:54.147740  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.16
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:21:54.147778  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:54.150547  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:54.151173  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:54.151208  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:54.151345  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:54.151561  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:54.151771  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:54.151918  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:54.152095  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:54.152266  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:54.152281  652196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
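
The command above is an idempotent install: `diff -u` exits 0 when the file on disk already matches, so the mv/daemon-reload/enable/restart group only runs when the unit actually changed. The same guard written out long-hand, using the paths from the command above:

    # Sketch: install docker.service only when its content changed.
    SRC=/lib/systemd/system/docker.service.new
    DST=/lib/systemd/system/docker.service
    if ! sudo diff -u "$DST" "$SRC"; then
      sudo mv "$SRC" "$DST"
      sudo systemctl -f daemon-reload &&
        sudo systemctl -f enable docker &&
        sudo systemctl -f restart docker
    fi
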
	I0701 12:21:56.028628  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 12:21:56.028682  652196 machine.go:97] duration metric: took 2.834118436s to provisionDockerMachine
	I0701 12:21:56.028701  652196 start.go:293] postStartSetup for "ha-735960-m02" (driver="kvm2")
	I0701 12:21:56.028716  652196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:21:56.028738  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.029099  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:21:56.029132  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:56.031882  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.032264  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.032289  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.032433  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:56.032608  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.032817  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:56.032971  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:21:56.117309  652196 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:21:56.121231  652196 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:21:56.121263  652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:21:56.121324  652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:21:56.121391  652196 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:21:56.121402  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:21:56.121478  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:21:56.130302  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:21:56.152776  652196 start.go:296] duration metric: took 124.058691ms for postStartSetup
	I0701 12:21:56.152821  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.153142  652196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:21:56.153170  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:56.155689  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.156094  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.156120  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.156332  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:56.156555  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.156727  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:56.156917  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:21:56.240391  652196 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:21:56.240454  652196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:21:56.280843  652196 fix.go:56] duration metric: took 18.651393475s for fixHost
	I0701 12:21:56.280895  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:56.283268  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.283590  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.283617  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.283860  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:56.284107  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.284307  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.284501  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:56.284686  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:56.284888  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:56.284903  652196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0701 12:21:56.398873  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836516.359963406
	
	I0701 12:21:56.398893  652196 fix.go:216] guest clock: 1719836516.359963406
	I0701 12:21:56.398901  652196 fix.go:229] Guest: 2024-07-01 12:21:56.359963406 +0000 UTC Remote: 2024-07-01 12:21:56.280872467 +0000 UTC m=+42.319261894 (delta=79.090939ms)
	I0701 12:21:56.398919  652196 fix.go:200] guest clock delta is within tolerance: 79.090939ms
	I0701 12:21:56.398924  652196 start.go:83] releasing machines lock for "ha-735960-m02", held for 18.769503298s
	I0701 12:21:56.398940  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.399198  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:21:56.401982  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.402404  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.402436  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.404680  652196 out.go:177] * Found network options:
	I0701 12:21:56.406167  652196 out.go:177]   - NO_PROXY=192.168.39.16
	W0701 12:21:56.407620  652196 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:21:56.407664  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.408285  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.408498  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.408606  652196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:21:56.408647  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	W0701 12:21:56.408741  652196 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:21:56.408826  652196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0701 12:21:56.408849  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:56.411170  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.411559  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.411598  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.411651  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.411933  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:56.412130  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.412221  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.412247  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.412295  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:56.412519  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:56.412508  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:21:56.412720  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.412871  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:56.412987  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	W0701 12:21:56.492511  652196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:21:56.492595  652196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:21:56.515270  652196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:21:56.515305  652196 start.go:494] detecting cgroup driver to use...
	I0701 12:21:56.515419  652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:21:56.549004  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:21:56.560711  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:21:56.578763  652196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:21:56.578832  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:21:56.589742  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:21:56.606645  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:21:56.620036  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:21:56.632033  652196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:21:56.642458  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:21:56.653078  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:21:56.663035  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:21:56.673203  652196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:21:56.682348  652196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:21:56.691388  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:56.798709  652196 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 12:21:56.821386  652196 start.go:494] detecting cgroup driver to use...
	I0701 12:21:56.821493  652196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:21:56.841303  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:21:56.857934  652196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:21:56.877318  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:21:56.889777  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:21:56.901844  652196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:21:56.927595  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:21:56.940849  652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:21:56.958116  652196 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:21:56.961664  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:21:56.969985  652196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:21:56.985048  652196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:21:57.096072  652196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:21:57.211289  652196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:21:57.211354  652196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 12:21:57.227069  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:57.341292  652196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:22:58.423195  652196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.08185799s)
	I0701 12:22:58.423268  652196 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0701 12:22:58.444321  652196 out.go:177] 
	W0701 12:22:58.445678  652196 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 01 12:21:54 ha-735960-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 01 12:21:54 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:54.524329635Z" level=info msg="Starting up"
	Jul 01 12:21:54 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:54.525054987Z" level=info msg="containerd not running, starting managed containerd"
	Jul 01 12:21:54 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:54.525787354Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=513
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.553695593Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572290393Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572432449Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572518940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572558429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572981597Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573093539Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573355911Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573425452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573469593Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573505057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573782642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.574848351Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.576951334Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577031827Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577253828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577304329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577551634Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577624370Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577665230Z" level=info msg="metadata content store policy set" policy=shared
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.580979416Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581128476Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581284824Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581371031Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581432559Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581524784Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581996275Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582118070Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582162131Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582245548Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582319648Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582368655Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582407448Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582445279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582484550Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582521928Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582558472Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582601035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582656126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582693985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582741537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582779033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582815513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582853076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582892671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582938669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582980248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583032987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583083364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583122445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583161506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583262727Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583333396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583373579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583414811Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583520612Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583751718Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583800626Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583838317Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583874340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583912430Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583991424Z" level=info msg="NRI interface is disabled by configuration."
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584364167Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584467963Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584654486Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584785754Z" level=info msg="containerd successfully booted in 0.032655s"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.555699119Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.620790434Z" level=info msg="Loading containers: start."
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.813021303Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.888534738Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.940299653Z" level=info msg="Loading containers: done."
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.956534314Z" level=info msg="Docker daemon" commit=ff1e2c0 containerd-snapshotter=false storage-driver=overlay2 version=27.0.1
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.956851438Z" level=info msg="Daemon has completed initialization"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.988054435Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.988129188Z" level=info msg="API listen on [::]:2376"
	Jul 01 12:21:55 ha-735960-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.316115209Z" level=info msg="Processing signal 'terminated'"
	Jul 01 12:21:57 ha-735960-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317321834Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317386191Z" level=info msg="Daemon shutdown complete"
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317447382Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317464543Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 01 12:21:58 ha-735960-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 01 12:21:58 ha-735960-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 01 12:21:58 ha-735960-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 01 12:21:58 ha-735960-m02 dockerd[1188]: time="2024-07-01T12:21:58.364754006Z" level=info msg="Starting up"
	Jul 01 12:22:58 ha-735960-m02 dockerd[1188]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 01 12:22:58 ha-735960-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 01 12:22:58 ha-735960-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 01 12:22:58 ha-735960-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 01 12:21:54 ha-735960-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 01 12:21:54 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:54.524329635Z" level=info msg="Starting up"
	Jul 01 12:21:54 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:54.525054987Z" level=info msg="containerd not running, starting managed containerd"
	Jul 01 12:21:54 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:54.525787354Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=513
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.553695593Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572290393Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572432449Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572518940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572558429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572981597Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573093539Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573355911Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573425452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573469593Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573505057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573782642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.574848351Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.576951334Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577031827Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577253828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577304329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577551634Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577624370Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577665230Z" level=info msg="metadata content store policy set" policy=shared
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.580979416Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581128476Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581284824Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581371031Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581432559Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581524784Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581996275Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582118070Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582162131Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582245548Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582319648Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582368655Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582407448Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582445279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582484550Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582521928Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582558472Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582601035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582656126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582693985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582741537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582779033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582815513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582853076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582892671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582938669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582980248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583032987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583083364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583122445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583161506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583262727Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583333396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583373579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583414811Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583520612Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583751718Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583800626Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583838317Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583874340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583912430Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583991424Z" level=info msg="NRI interface is disabled by configuration."
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584364167Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584467963Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584654486Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584785754Z" level=info msg="containerd successfully booted in 0.032655s"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.555699119Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.620790434Z" level=info msg="Loading containers: start."
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.813021303Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.888534738Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.940299653Z" level=info msg="Loading containers: done."
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.956534314Z" level=info msg="Docker daemon" commit=ff1e2c0 containerd-snapshotter=false storage-driver=overlay2 version=27.0.1
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.956851438Z" level=info msg="Daemon has completed initialization"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.988054435Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.988129188Z" level=info msg="API listen on [::]:2376"
	Jul 01 12:21:55 ha-735960-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.316115209Z" level=info msg="Processing signal 'terminated'"
	Jul 01 12:21:57 ha-735960-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317321834Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317386191Z" level=info msg="Daemon shutdown complete"
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317447382Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317464543Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 01 12:21:58 ha-735960-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 01 12:21:58 ha-735960-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 01 12:21:58 ha-735960-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 01 12:21:58 ha-735960-m02 dockerd[1188]: time="2024-07-01T12:21:58.364754006Z" level=info msg="Starting up"
	Jul 01 12:22:58 ha-735960-m02 dockerd[1188]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 01 12:22:58 ha-735960-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 01 12:22:58 ha-735960-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 01 12:22:58 ha-735960-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0701 12:22:58.445741  652196 out.go:239] * 
	* 
	W0701 12:22:58.447325  652196 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:22:58.449434  652196 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-linux-amd64 node list -p ha-735960 -v=7 --alsologtostderr" : exit status 90
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-735960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-735960 -n ha-735960
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-735960 -n ha-735960: exit status 2 (231.983714ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-735960 cp ha-735960-m03:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m02:/home/docker/cp-test_ha-735960-m03_ha-735960-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m02 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m03_ha-735960-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m03:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04:/home/docker/cp-test_ha-735960-m03_ha-735960-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m04 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m03_ha-735960-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-735960 cp testdata/cp-test.txt                                                | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2826819896/001/cp-test_ha-735960-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960:/home/docker/cp-test_ha-735960-m04_ha-735960.txt                       |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960 sudo cat                                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m04_ha-735960.txt                                 |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m02:/home/docker/cp-test_ha-735960-m04_ha-735960-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m02 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m04_ha-735960-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m03:/home/docker/cp-test_ha-735960-m04_ha-735960-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m03 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m04_ha-735960-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-735960 node stop m02 -v=7                                                     | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-735960 node start m02 -v=7                                                    | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:20 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-735960 -v=7                                                           | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-735960 -v=7                                                                | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:20 UTC | 01 Jul 24 12:21 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-735960 --wait=true -v=7                                                    | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-735960                                                                | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:22 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/01 12:21:13
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 12:21:13.996326  652196 out.go:291] Setting OutFile to fd 1 ...
	I0701 12:21:13.996600  652196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:21:13.996610  652196 out.go:304] Setting ErrFile to fd 2...
	I0701 12:21:13.996615  652196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:21:13.996825  652196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
	I0701 12:21:13.997417  652196 out.go:298] Setting JSON to false
	I0701 12:21:13.998463  652196 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7412,"bootTime":1719829062,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 12:21:13.998525  652196 start.go:139] virtualization: kvm guest
	I0701 12:21:14.000967  652196 out.go:177] * [ha-735960] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0701 12:21:14.002666  652196 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 12:21:14.002690  652196 notify.go:220] Checking for updates...
	I0701 12:21:14.005489  652196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:21:14.006983  652196 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:21:14.008350  652196 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-630650/.minikube
	I0701 12:21:14.009593  652196 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 12:21:14.011091  652196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:21:14.012857  652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:21:14.012999  652196 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 12:21:14.013468  652196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:21:14.013542  652196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:21:14.028581  652196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35775
	I0701 12:21:14.028967  652196 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:21:14.029528  652196 main.go:141] libmachine: Using API Version  1
	I0701 12:21:14.029551  652196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:21:14.029916  652196 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:21:14.030116  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:14.065038  652196 out.go:177] * Using the kvm2 driver based on existing profile
	I0701 12:21:14.066535  652196 start.go:297] selected driver: kvm2
	I0701 12:21:14.066551  652196 start.go:901] validating driver "kvm2" against &{Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:21:14.066723  652196 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:21:14.067041  652196 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:21:14.067114  652196 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19166-630650/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0701 12:21:14.082191  652196 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0701 12:21:14.082920  652196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:21:14.082959  652196 cni.go:84] Creating CNI manager for ""
	I0701 12:21:14.082966  652196 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0701 12:21:14.083026  652196 start.go:340] cluster config:
	{Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:21:14.083142  652196 iso.go:125] acquiring lock: {Name:mk5c70910f61bc270c83609c48670eaf9d7e0602 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:21:14.086358  652196 out.go:177] * Starting "ha-735960" primary control-plane node in "ha-735960" cluster
	I0701 12:21:14.087757  652196 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:21:14.087794  652196 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0701 12:21:14.087805  652196 cache.go:56] Caching tarball of preloaded images
	I0701 12:21:14.087882  652196 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:21:14.087892  652196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:21:14.088044  652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:21:14.088232  652196 start.go:360] acquireMachinesLock for ha-735960: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:21:14.088271  652196 start.go:364] duration metric: took 21.615µs to acquireMachinesLock for "ha-735960"
	I0701 12:21:14.088285  652196 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:21:14.088293  652196 fix.go:54] fixHost starting: 
	I0701 12:21:14.088547  652196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:21:14.088578  652196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:21:14.103070  652196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42935
	I0701 12:21:14.103508  652196 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:21:14.104025  652196 main.go:141] libmachine: Using API Version  1
	I0701 12:21:14.104050  652196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:21:14.104424  652196 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:21:14.104649  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:14.104829  652196 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:21:14.106608  652196 fix.go:112] recreateIfNeeded on ha-735960: state=Stopped err=<nil>
	I0701 12:21:14.106630  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	W0701 12:21:14.106790  652196 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:21:14.108833  652196 out.go:177] * Restarting existing kvm2 VM for "ha-735960" ...
	I0701 12:21:14.110060  652196 main.go:141] libmachine: (ha-735960) Calling .Start
	I0701 12:21:14.110234  652196 main.go:141] libmachine: (ha-735960) Ensuring networks are active...
	I0701 12:21:14.110976  652196 main.go:141] libmachine: (ha-735960) Ensuring network default is active
	I0701 12:21:14.111299  652196 main.go:141] libmachine: (ha-735960) Ensuring network mk-ha-735960 is active
	I0701 12:21:14.111665  652196 main.go:141] libmachine: (ha-735960) Getting domain xml...
	I0701 12:21:14.112420  652196 main.go:141] libmachine: (ha-735960) Creating domain...
	I0701 12:21:15.307133  652196 main.go:141] libmachine: (ha-735960) Waiting to get IP...
	I0701 12:21:15.308062  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:15.308526  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:15.308647  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:15.308493  652224 retry.go:31] will retry after 239.111405ms: waiting for machine to come up
	I0701 12:21:15.549211  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:15.549648  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:15.549679  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:15.549597  652224 retry.go:31] will retry after 248.256131ms: waiting for machine to come up
	I0701 12:21:15.799054  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:15.799481  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:15.799534  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:15.799422  652224 retry.go:31] will retry after 380.468685ms: waiting for machine to come up
	I0701 12:21:16.181969  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:16.182432  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:16.182634  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:16.182540  652224 retry.go:31] will retry after 592.847587ms: waiting for machine to come up
	I0701 12:21:16.777393  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:16.777837  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:16.777867  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:16.777790  652224 retry.go:31] will retry after 639.749416ms: waiting for machine to come up
	I0701 12:21:17.419540  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:17.419941  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:17.419965  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:17.419916  652224 retry.go:31] will retry after 891.768613ms: waiting for machine to come up
	I0701 12:21:18.312967  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:18.313455  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:18.313484  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:18.313399  652224 retry.go:31] will retry after 1.112048412s: waiting for machine to come up
	I0701 12:21:19.427190  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:19.427624  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:19.427655  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:19.427568  652224 retry.go:31] will retry after 1.150138437s: waiting for machine to come up
	I0701 12:21:20.579868  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:20.580291  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:20.580325  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:20.580216  652224 retry.go:31] will retry after 1.129763596s: waiting for machine to come up
	I0701 12:21:21.711416  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:21.711892  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:21.711924  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:21.711831  652224 retry.go:31] will retry after 2.143074349s: waiting for machine to come up
	I0701 12:21:23.858081  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:23.858617  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:23.858643  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:23.858578  652224 retry.go:31] will retry after 2.436757856s: waiting for machine to come up
	I0701 12:21:26.297727  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:26.298302  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:26.298352  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:26.298269  652224 retry.go:31] will retry after 2.609229165s: waiting for machine to come up
	I0701 12:21:28.911224  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:28.911698  652196 main.go:141] libmachine: (ha-735960) Found IP for machine: 192.168.39.16
	I0701 12:21:28.911722  652196 main.go:141] libmachine: (ha-735960) Reserving static IP address...
	I0701 12:21:28.911731  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has current primary IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:28.912401  652196 main.go:141] libmachine: (ha-735960) Reserved static IP address: 192.168.39.16
	I0701 12:21:28.912425  652196 main.go:141] libmachine: (ha-735960) Waiting for SSH to be available...
	I0701 12:21:28.912468  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "ha-735960", mac: "52:54:00:6c:20:7c", ip: "192.168.39.16"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:28.912492  652196 main.go:141] libmachine: (ha-735960) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960", mac: "52:54:00:6c:20:7c", ip: "192.168.39.16"}
	I0701 12:21:28.912507  652196 main.go:141] libmachine: (ha-735960) DBG | Getting to WaitForSSH function...
	I0701 12:21:28.914934  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:28.915448  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:28.915478  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:28.915627  652196 main.go:141] libmachine: (ha-735960) DBG | Using SSH client type: external
	I0701 12:21:28.915655  652196 main.go:141] libmachine: (ha-735960) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa (-rw-------)
	I0701 12:21:28.915680  652196 main.go:141] libmachine: (ha-735960) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:21:28.915698  652196 main.go:141] libmachine: (ha-735960) DBG | About to run SSH command:
	I0701 12:21:28.915730  652196 main.go:141] libmachine: (ha-735960) DBG | exit 0
	I0701 12:21:29.042314  652196 main.go:141] libmachine: (ha-735960) DBG | SSH cmd err, output: <nil>: 
	I0701 12:21:29.042747  652196 main.go:141] libmachine: (ha-735960) Calling .GetConfigRaw
	I0701 12:21:29.043414  652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:21:29.046291  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.046689  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.046714  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.046967  652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:21:29.047187  652196 machine.go:94] provisionDockerMachine start ...
	I0701 12:21:29.047211  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:29.047467  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.049524  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.049899  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.049924  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.050040  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.050240  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.050477  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.050669  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.050868  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:29.051073  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:29.051086  652196 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:21:29.166645  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:21:29.166687  652196 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:21:29.166983  652196 buildroot.go:166] provisioning hostname "ha-735960"
	I0701 12:21:29.167013  652196 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:21:29.167232  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.169829  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.170228  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.170260  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.170403  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.170603  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.170773  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.170913  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.171082  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:29.171259  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:29.171270  652196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960 && echo "ha-735960" | sudo tee /etc/hostname
	I0701 12:21:29.295697  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960
	
	I0701 12:21:29.295728  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.298625  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.299014  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.299041  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.299233  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.299434  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.299641  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.299795  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.299954  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:29.300149  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:29.300171  652196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:21:29.418489  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 12:21:29.418522  652196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:21:29.418577  652196 buildroot.go:174] setting up certificates
	I0701 12:21:29.418593  652196 provision.go:84] configureAuth start
	I0701 12:21:29.418612  652196 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:21:29.418889  652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:21:29.421815  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.422238  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.422275  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.422477  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.424787  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.425187  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.425216  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.425427  652196 provision.go:143] copyHostCerts
	I0701 12:21:29.425466  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:21:29.425530  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:21:29.425542  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:21:29.425624  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:21:29.425732  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:21:29.425753  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:21:29.425758  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:21:29.425798  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:21:29.425856  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:21:29.425872  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:21:29.425877  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:21:29.425897  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:21:29.425958  652196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960 san=[127.0.0.1 192.168.39.16 ha-735960 localhost minikube]
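The server certificate generated above carries the listed SANs (loopback, the VM IP, the hostname, and the cluster names), so TLS clients can verify the dockerd endpoint under any of those identities. A self-contained sketch of issuing a SAN-bearing certificate with Go's standard library; the names and IPs come from the log line above, but the self-signing and everything else here is illustrative (minikube signs with its own CA instead):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-735960"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the "san=[...]" list in the log line above.
		DNSNames:    []string{"ha-735960", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.16")},
	}
	// Self-signed for brevity only.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}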
	I0701 12:21:29.592360  652196 provision.go:177] copyRemoteCerts
	I0701 12:21:29.592437  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:21:29.592463  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.595489  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.595884  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.595908  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.596131  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.596356  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.596515  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.596646  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:21:29.684124  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:21:29.684214  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0701 12:21:29.707185  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:21:29.707254  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 12:21:29.729605  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:21:29.729687  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:21:29.751505  652196 provision.go:87] duration metric: took 332.894756ms to configureAuth
	I0701 12:21:29.751536  652196 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:21:29.751802  652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:21:29.751834  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:29.752179  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.754903  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.755331  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.755367  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.755494  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.755709  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.755868  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.756016  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.756168  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:29.756341  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:29.756351  652196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:21:29.867557  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:21:29.867582  652196 buildroot.go:70] root file system type: tmpfs
	I0701 12:21:29.867738  652196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:21:29.867768  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.870702  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.871111  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.871152  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.871294  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.871532  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.871806  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.871989  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.872177  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:29.872347  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:29.872410  652196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:21:29.995623  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:21:29.995671  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.998574  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.998969  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.999001  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.999184  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.999403  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.999598  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.999772  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.999916  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:30.000093  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:30.000109  652196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:21:31.849411  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 12:21:31.849452  652196 machine.go:97] duration metric: took 2.802248138s to provisionDockerMachine
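The `diff -u old new || { mv ...; systemctl ... }` step above is an install-if-changed idiom: `diff` exits non-zero when the files differ or the old unit is missing (as here, hence the "can't stat" message), and only then is the new unit moved into place and the service reloaded, enabled, and restarted. A sketch of the same idea in Go, with placeholder paths and unit content:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged replaces a systemd unit and restarts the service only
// when the rendered content differs from what is already on disk -- the
// same effect as the diff-or-replace one-liner in the log above.
func installIfChanged(path string, content []byte, service string) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, content) {
		return nil // unchanged: nothing to do
	}
	if err := os.WriteFile(path, content, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", service},
		{"restart", service},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	// Illustrative values only.
	_ = installIfChanged("/lib/systemd/system/docker.service",
		[]byte("[Unit]\nDescription=example\n"), "docker")
}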
	I0701 12:21:31.849473  652196 start.go:293] postStartSetup for "ha-735960" (driver="kvm2")
	I0701 12:21:31.849487  652196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:21:31.849508  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:31.849934  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:21:31.849982  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:31.853029  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:31.853464  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:31.853494  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:31.853656  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:31.853877  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:31.854065  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:31.854242  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:21:31.948096  652196 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:21:31.952493  652196 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:21:31.952522  652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:21:31.952580  652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:21:31.952654  652196 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:21:31.952664  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:21:31.952750  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:21:31.962034  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:21:31.985898  652196 start.go:296] duration metric: took 136.407484ms for postStartSetup
	I0701 12:21:31.985953  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:31.986287  652196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:21:31.986316  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:31.988934  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:31.989328  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:31.989359  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:31.989497  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:31.989724  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:31.989863  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:31.990038  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:21:32.076710  652196 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:21:32.076807  652196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:21:32.133792  652196 fix.go:56] duration metric: took 18.045488816s for fixHost
	I0701 12:21:32.133863  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:32.136703  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.137078  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:32.137110  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.137321  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:32.137591  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:32.137793  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:32.137963  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:32.138201  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:32.138518  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:32.138541  652196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0701 12:21:32.254973  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836492.215186729
	
	I0701 12:21:32.255001  652196 fix.go:216] guest clock: 1719836492.215186729
	I0701 12:21:32.255007  652196 fix.go:229] Guest: 2024-07-01 12:21:32.215186729 +0000 UTC Remote: 2024-07-01 12:21:32.133836118 +0000 UTC m=+18.172225533 (delta=81.350611ms)
	I0701 12:21:32.255027  652196 fix.go:200] guest clock delta is within tolerance: 81.350611ms
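The clock check above runs `date +%s.%N` in the guest and compares it against the host clock; the observed 81 ms delta is inside tolerance, so no resync is needed. A minimal sketch of that comparison (host address is from the log; the tolerance and SSH invocation are placeholders):

package main

import (
	"fmt"
	"math"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta runs `date +%s.%N` over ssh and returns the absolute
// skew between the guest and host clocks.
func guestClockDelta(host string) (time.Duration, error) {
	out, err := exec.Command("ssh", "docker@"+host, "date +%s.%N").Output()
	if err != nil {
		return 0, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Duration(math.Abs(float64(time.Since(guest)))), nil
}

func main() {
	d, err := guestClockDelta("192.168.39.16")
	if err == nil && d < 2*time.Second { // hypothetical tolerance
		fmt.Println("guest clock within tolerance:", d)
	}
}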
	I0701 12:21:32.255032  652196 start.go:83] releasing machines lock for "ha-735960", held for 18.166751927s
	I0701 12:21:32.255050  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:32.255338  652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:21:32.258091  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.258459  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:32.258481  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.258679  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:32.259224  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:32.259383  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:32.259520  652196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:21:32.259564  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:32.259693  652196 ssh_runner.go:195] Run: cat /version.json
	I0701 12:21:32.259718  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:32.262127  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.262481  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:32.262518  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.262538  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.262653  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:32.262845  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:32.263031  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:32.263054  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:32.263074  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.263215  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:21:32.263229  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:32.263398  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:32.263547  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:32.263699  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:21:32.343012  652196 ssh_runner.go:195] Run: systemctl --version
	I0701 12:21:32.428409  652196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0701 12:21:32.433742  652196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:21:32.433815  652196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:21:32.449052  652196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:21:32.449087  652196 start.go:494] detecting cgroup driver to use...
	I0701 12:21:32.449338  652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:21:32.471651  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:21:32.481832  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:21:32.491470  652196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:21:32.491548  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:21:32.501229  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:21:32.511119  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:21:32.520826  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:21:32.530559  652196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:21:32.542109  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:21:32.551821  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:21:32.561403  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:21:32.571068  652196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:21:32.579813  652196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:21:32.588595  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:32.705377  652196 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 12:21:32.724169  652196 start.go:494] detecting cgroup driver to use...
	I0701 12:21:32.724285  652196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:21:32.739050  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:21:32.753169  652196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:21:32.769805  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:21:32.783750  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:21:32.797509  652196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:21:32.821510  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:21:32.835901  652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:21:32.854192  652196 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:21:32.858039  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:21:32.867652  652196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:21:32.884216  652196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:21:33.001636  652196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:21:33.121229  652196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:21:33.121419  652196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 12:21:33.138482  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:33.262395  652196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:21:35.714549  652196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.452099351s)
	I0701 12:21:35.714642  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 12:21:35.727946  652196 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0701 12:21:35.744089  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:21:35.757426  652196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:21:35.868089  652196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:21:35.989857  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:36.121343  652196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:21:36.138520  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:21:36.152026  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:36.271312  652196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0701 12:21:36.351567  652196 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:21:36.351668  652196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0701 12:21:36.357143  652196 start.go:562] Will wait 60s for crictl version
	I0701 12:21:36.357212  652196 ssh_runner.go:195] Run: which crictl
	I0701 12:21:36.361384  652196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:21:36.400372  652196 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.1
	RuntimeApiVersion:  v1
	I0701 12:21:36.400446  652196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:21:36.427941  652196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:21:36.456620  652196 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
	I0701 12:21:36.456687  652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:21:36.459384  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:36.459752  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:36.459781  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:36.459970  652196 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0701 12:21:36.463956  652196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:21:36.476676  652196 kubeadm.go:877] updating cluster {Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0701 12:21:36.476851  652196 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:21:36.476914  652196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:21:36.493466  652196 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
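The "Images already preloaded, skipping extraction" decision that follows comes from comparing this `docker images --format {{.Repository}}:{{.Tag}}` listing against the image set required for Kubernetes v1.30.2; the preload tarball is only extracted when something is missing. A sketch of that set-difference check (the required list is abbreviated and illustrative):

package main

import "fmt"

// missingImages reports which required images are absent from the
// `docker images` output shown above.
func missingImages(required, present []string) []string {
	have := make(map[string]bool, len(present))
	for _, img := range present {
		have[img] = true
	}
	var missing []string
	for _, img := range required {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.30.2", // abbreviated list
		"registry.k8s.io/etcd:3.5.12-0",
	}
	present := []string{"registry.k8s.io/kube-apiserver:v1.30.2"}
	fmt.Println(missingImages(required, present)) // -> only etcd missing
}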
	I0701 12:21:36.493530  652196 docker.go:615] Images already preloaded, skipping extraction
	I0701 12:21:36.493620  652196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:21:36.510908  652196 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0701 12:21:36.510939  652196 cache_images.go:84] Images are preloaded, skipping loading
	I0701 12:21:36.510952  652196 kubeadm.go:928] updating node { 192.168.39.16 8443 v1.30.2 docker true true} ...
	I0701 12:21:36.511079  652196 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0701 12:21:36.511139  652196 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0701 12:21:36.536408  652196 cni.go:84] Creating CNI manager for ""
	I0701 12:21:36.536430  652196 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0701 12:21:36.536441  652196 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0701 12:21:36.536470  652196 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.16 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-735960 NodeName:ha-735960 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0701 12:21:36.536633  652196 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-735960"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
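	
The config rendered above is a single multi-document YAML stream: InitConfiguration and ClusterConfiguration (apiVersion kubeadm.k8s.io/v1beta3), then KubeletConfiguration and KubeProxyConfiguration, separated by `---`. A dependency-free sketch of splitting such a stream and listing each document's kind; this is purely illustrative, as kubeadm does its own parsing:

package main

import (
	"fmt"
	"strings"
)

func main() {
	stream := `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration`

	// Split on document separators, then report each document's kind.
	for _, doc := range strings.Split(stream, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Println(strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}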
	
	I0701 12:21:36.536656  652196 kube-vip.go:115] generating kube-vip config ...
	I0701 12:21:36.536698  652196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0701 12:21:36.551906  652196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0701 12:21:36.552024  652196 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
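The kube-vip static pod above elects one control-plane node (lease plndr-cp-lock, 5s duration) to hold the VIP 192.168.39.254 and, with lb_enable set, load-balances API traffic on port 8443. When a restart like this one fails, a quick reachability probe of the VIP can localize the problem; the address and port below come from the manifest, the probe itself is an illustrative sketch:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// VIP and port taken from the kube-vip manifest above.
	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP answering on 8443")
}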
	I0701 12:21:36.552078  652196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 12:21:36.561989  652196 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:21:36.562059  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0701 12:21:36.571281  652196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0701 12:21:36.587480  652196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:21:36.603596  652196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0701 12:21:36.621063  652196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0701 12:21:36.637192  652196 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0701 12:21:36.640909  652196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:21:36.652690  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:36.768142  652196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:21:36.786625  652196 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960 for IP: 192.168.39.16
	I0701 12:21:36.786655  652196 certs.go:194] generating shared ca certs ...
	I0701 12:21:36.786676  652196 certs.go:226] acquiring lock for ca certs: {Name:mk34e166bfd069e523b2325e14d1812c523bff53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:36.786854  652196 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key
	I0701 12:21:36.786904  652196 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key
	I0701 12:21:36.786915  652196 certs.go:256] generating profile certs ...
	I0701 12:21:36.787017  652196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key
	I0701 12:21:36.787046  652196 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af
	I0701 12:21:36.787059  652196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.16 192.168.39.86 192.168.39.97 192.168.39.254]
	I0701 12:21:37.059263  652196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af ...
	I0701 12:21:37.059305  652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af: {Name:mk1be9dc4667506ac6fdcfb1e313edd1292fe7f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:37.059483  652196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af ...
	I0701 12:21:37.059496  652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af: {Name:mkf9220e489bd04f035dab270c790bb3448ca6be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:37.059596  652196 certs.go:381] copying /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af -> /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt
	I0701 12:21:37.059809  652196 certs.go:385] copying /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af -> /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key
	I0701 12:21:37.059969  652196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key
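	The apiserver certificate generated above carries IP SANs for the service IP, localhost, all three control-plane nodes, and the VIP, so the same cert validates no matter which address a client dials. A self-contained crypto/x509 sketch of the same shape (it mints a throwaway CA instead of loading minikube's ca.key):

```go
// Sketch: issue a CA-signed serving cert whose IP SANs mirror the log line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list mirrors the IPs logged by crypto.go above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.16"), net.ParseIP("192.168.39.86"),
			net.ParseIP("192.168.39.97"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```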
	I0701 12:21:37.059987  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 12:21:37.060000  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 12:21:37.060014  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 12:21:37.060026  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 12:21:37.060038  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0701 12:21:37.060054  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0701 12:21:37.060066  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0701 12:21:37.060077  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0701 12:21:37.060165  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem (1338 bytes)
	W0701 12:21:37.060197  652196 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854_empty.pem, impossibly tiny 0 bytes
	I0701 12:21:37.060207  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 12:21:37.060228  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:21:37.060248  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:21:37.060270  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem (1675 bytes)
	I0701 12:21:37.060305  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:21:37.060331  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem -> /usr/share/ca-certificates/637854.pem
	I0701 12:21:37.060347  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /usr/share/ca-certificates/6378542.pem
	I0701 12:21:37.060359  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:21:37.061045  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:21:37.111708  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:21:37.168649  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:21:37.204675  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:21:37.241167  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0701 12:21:37.265225  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 12:21:37.288613  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 12:21:37.312645  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 12:21:37.337494  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem --> /usr/share/ca-certificates/637854.pem (1338 bytes)
	I0701 12:21:37.361044  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /usr/share/ca-certificates/6378542.pem (1708 bytes)
	I0701 12:21:37.385424  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:21:37.409054  652196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 12:21:37.426602  652196 ssh_runner.go:195] Run: openssl version
	I0701 12:21:37.432129  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:21:37.442695  652196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:21:37.447331  652196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:21:37.447415  652196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:21:37.453215  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 12:21:37.464086  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/637854.pem && ln -fs /usr/share/ca-certificates/637854.pem /etc/ssl/certs/637854.pem"
	I0701 12:21:37.474527  652196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/637854.pem
	I0701 12:21:37.479057  652196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 12:11 /usr/share/ca-certificates/637854.pem
	I0701 12:21:37.479123  652196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/637854.pem
	I0701 12:21:37.484641  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/637854.pem /etc/ssl/certs/51391683.0"
	I0701 12:21:37.495175  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6378542.pem && ln -fs /usr/share/ca-certificates/6378542.pem /etc/ssl/certs/6378542.pem"
	I0701 12:21:37.505961  652196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6378542.pem
	I0701 12:21:37.510286  652196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 12:11 /usr/share/ca-certificates/6378542.pem
	I0701 12:21:37.510365  652196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem
	I0701 12:21:37.516124  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6378542.pem /etc/ssl/certs/3ec20f2e.0"
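	Each CA cert installed above also gets a symlink named after its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL locates trust anchors in /etc/ssl/certs. A sketch of that hash-and-link step, shelling out to openssl exactly as the logged commands do:

```go
// Sketch: compute a cert's subject hash with openssl and create the <hash>.0 link.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // replace a stale link if present
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```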
	I0701 12:21:37.527154  652196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 12:21:37.532024  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 12:21:37.538145  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 12:21:37.544280  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 12:21:37.550448  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 12:21:37.556356  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 12:21:37.562174  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
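	The six openssl runs above all use -checkend 86400, i.e. fail if the certificate expires within the next 24 hours. The equivalent check in native Go (path illustrative):

```go
// Sketch: report whether a PEM cert expires within a given window,
// matching openssl's `x509 -checkend 86400` semantics.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
```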
	I0701 12:21:37.568144  652196 kubeadm.go:391] StartCluster: {Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:21:37.568362  652196 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0701 12:21:37.586457  652196 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0701 12:21:37.596129  652196 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0701 12:21:37.596158  652196 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0701 12:21:37.596164  652196 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0701 12:21:37.596237  652196 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 12:21:37.605715  652196 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 12:21:37.606193  652196 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-735960" does not appear in /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:21:37.606354  652196 kubeconfig.go:62] /home/jenkins/minikube-integration/19166-630650/kubeconfig needs updating (will repair): [kubeconfig missing "ha-735960" cluster setting kubeconfig missing "ha-735960" context setting]
	I0701 12:21:37.606708  652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/kubeconfig: {Name:mke3ef9d019eff4edd273b00c416fd77ed009242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:37.607135  652196 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:21:37.607365  652196 kapi.go:59] client config for ha-735960: &rest.Config{Host:"https://192.168.39.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key", CAFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfbb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
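	Roughly what the kubeconfig repair above amounts to, sketched with client-go's clientcmd package (assumed dependency): if the "ha-735960" cluster and context entries are missing, add them and write the file back. Names and paths mirror the log; the auth details are elided.

```go
// Sketch: restore missing cluster/context entries in a kubeconfig via client-go.
package main

import (
	clientcmd "k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	const path = "/home/jenkins/minikube-integration/19166-630650/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	if _, ok := cfg.Clusters["ha-735960"]; !ok {
		cluster := clientcmdapi.NewCluster()
		cluster.Server = "https://192.168.39.16:8443"
		cfg.Clusters["ha-735960"] = cluster

		ctx := clientcmdapi.NewContext()
		ctx.Cluster = "ha-735960"
		ctx.AuthInfo = "ha-735960"
		cfg.Contexts["ha-735960"] = ctx
	}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}
```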
	I0701 12:21:37.607752  652196 cert_rotation.go:137] Starting client certificate rotation controller
	I0701 12:21:37.608047  652196 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 12:21:37.617685  652196 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.16
	I0701 12:21:37.617715  652196 kubeadm.go:591] duration metric: took 21.544408ms to restartPrimaryControlPlane
	I0701 12:21:37.617725  652196 kubeadm.go:393] duration metric: took 49.593354ms to StartCluster
	I0701 12:21:37.617748  652196 settings.go:142] acquiring lock: {Name:mk6f7c85ea77a73ff0ac851454721f2e6e309153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:37.617834  652196 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:21:37.618535  652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/kubeconfig: {Name:mke3ef9d019eff4edd273b00c416fd77ed009242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:37.618754  652196 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:21:37.618777  652196 start.go:240] waiting for startup goroutines ...
	I0701 12:21:37.618792  652196 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0701 12:21:37.619028  652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:21:37.621683  652196 out.go:177] * Enabled addons: 
	I0701 12:21:37.622979  652196 addons.go:510] duration metric: took 4.192015ms for enable addons: enabled=[]
	I0701 12:21:37.623011  652196 start.go:245] waiting for cluster config update ...
	I0701 12:21:37.623019  652196 start.go:254] writing updated cluster config ...
	I0701 12:21:37.624600  652196 out.go:177] 
	I0701 12:21:37.626023  652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:21:37.626124  652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:21:37.627745  652196 out.go:177] * Starting "ha-735960-m02" control-plane node in "ha-735960" cluster
	I0701 12:21:37.628946  652196 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:21:37.628969  652196 cache.go:56] Caching tarball of preloaded images
	I0701 12:21:37.629060  652196 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:21:37.629072  652196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:21:37.629161  652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:21:37.629353  652196 start.go:360] acquireMachinesLock for ha-735960-m02: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:21:37.629411  652196 start.go:364] duration metric: took 31.79µs to acquireMachinesLock for "ha-735960-m02"
	I0701 12:21:37.629427  652196 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:21:37.629440  652196 fix.go:54] fixHost starting: m02
	I0701 12:21:37.629698  652196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:21:37.629747  652196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:21:37.644981  652196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I0701 12:21:37.645473  652196 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:21:37.645947  652196 main.go:141] libmachine: Using API Version  1
	I0701 12:21:37.645969  652196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:21:37.646284  652196 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:21:37.646523  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:37.646646  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetState
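	The .GetVersion/.GetMachineName/.GetState calls above travel over a localhost RPC connection to the kvm2 driver plugin process (the "Plugin server listening at address 127.0.0.1:41663" line). A minimal net/rpc sketch of that shape; the Driver service here is invented for illustration, not the real libmachine interface:

```go
// Sketch: a plugin-style RPC server on an ephemeral localhost port,
// plus a client call, mirroring the driver handshake in the log.
package main

import (
	"fmt"
	"net"
	"net/rpc"
)

type Driver struct{}

func (d *Driver) GetState(_ int, state *string) error {
	*state = "Stopped"
	return nil
}

func main() {
	srv := rpc.NewServer()
	if err := srv.Register(new(Driver)); err != nil {
		panic(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0") // plugin announces its port
	if err != nil {
		panic(err)
	}
	go srv.Accept(ln)

	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	var state string
	if err := client.Call("Driver.GetState", 0, &state); err != nil {
		panic(err)
	}
	fmt.Println("driver state:", state) // "Stopped", as fix.go reports above
}
```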
	I0701 12:21:37.648195  652196 fix.go:112] recreateIfNeeded on ha-735960-m02: state=Stopped err=<nil>
	I0701 12:21:37.648228  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	W0701 12:21:37.648406  652196 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:21:37.650489  652196 out.go:177] * Restarting existing kvm2 VM for "ha-735960-m02" ...
	I0701 12:21:37.651975  652196 main.go:141] libmachine: (ha-735960-m02) Calling .Start
	I0701 12:21:37.652186  652196 main.go:141] libmachine: (ha-735960-m02) Ensuring networks are active...
	I0701 12:21:37.652916  652196 main.go:141] libmachine: (ha-735960-m02) Ensuring network default is active
	I0701 12:21:37.653282  652196 main.go:141] libmachine: (ha-735960-m02) Ensuring network mk-ha-735960 is active
	I0701 12:21:37.653613  652196 main.go:141] libmachine: (ha-735960-m02) Getting domain xml...
	I0701 12:21:37.654254  652196 main.go:141] libmachine: (ha-735960-m02) Creating domain...
	I0701 12:21:38.852369  652196 main.go:141] libmachine: (ha-735960-m02) Waiting to get IP...
	I0701 12:21:38.853358  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:38.853762  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:38.853832  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:38.853747  652384 retry.go:31] will retry after 295.798088ms: waiting for machine to come up
	I0701 12:21:39.151332  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:39.151886  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:39.151912  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:39.151845  652384 retry.go:31] will retry after 255.18729ms: waiting for machine to come up
	I0701 12:21:39.408310  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:39.408739  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:39.408792  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:39.408689  652384 retry.go:31] will retry after 457.740061ms: waiting for machine to come up
	I0701 12:21:39.868295  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:39.868702  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:39.868736  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:39.868629  652384 retry.go:31] will retry after 548.674851ms: waiting for machine to come up
	I0701 12:21:40.419597  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:40.420069  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:40.420100  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:40.420009  652384 retry.go:31] will retry after 755.113146ms: waiting for machine to come up
	I0701 12:21:41.176960  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:41.177380  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:41.177429  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:41.177309  652384 retry.go:31] will retry after 739.288718ms: waiting for machine to come up
	I0701 12:21:41.918305  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:41.918853  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:41.918884  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:41.918789  652384 retry.go:31] will retry after 722.041404ms: waiting for machine to come up
	I0701 12:21:42.642704  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:42.643188  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:42.643219  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:42.643113  652384 retry.go:31] will retry after 1.139279839s: waiting for machine to come up
	I0701 12:21:43.784719  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:43.785159  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:43.785193  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:43.785114  652384 retry.go:31] will retry after 1.276779849s: waiting for machine to come up
	I0701 12:21:45.063522  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:45.064026  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:45.064058  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:45.063969  652384 retry.go:31] will retry after 2.284492799s: waiting for machine to come up
	I0701 12:21:47.351530  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:47.352076  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:47.352113  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:47.351988  652384 retry.go:31] will retry after 2.171521184s: waiting for machine to come up
	I0701 12:21:49.526162  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:49.526566  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:49.526590  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:49.526523  652384 retry.go:31] will retry after 3.533181759s: waiting for machine to come up
	I0701 12:21:53.061482  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.062025  652196 main.go:141] libmachine: (ha-735960-m02) Found IP for machine: 192.168.39.86
	I0701 12:21:53.062048  652196 main.go:141] libmachine: (ha-735960-m02) Reserving static IP address...
	I0701 12:21:53.062060  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has current primary IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.062473  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "ha-735960-m02", mac: "52:54:00:0b:2f:ce", ip: "192.168.39.86"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.062504  652196 main.go:141] libmachine: (ha-735960-m02) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960-m02", mac: "52:54:00:0b:2f:ce", ip: "192.168.39.86"}
	I0701 12:21:53.062534  652196 main.go:141] libmachine: (ha-735960-m02) Reserved static IP address: 192.168.39.86
	I0701 12:21:53.062554  652196 main.go:141] libmachine: (ha-735960-m02) Waiting for SSH to be available...
	I0701 12:21:53.062566  652196 main.go:141] libmachine: (ha-735960-m02) DBG | Getting to WaitForSSH function...
	I0701 12:21:53.064461  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.064796  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.064828  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.064893  652196 main.go:141] libmachine: (ha-735960-m02) DBG | Using SSH client type: external
	I0701 12:21:53.064938  652196 main.go:141] libmachine: (ha-735960-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa (-rw-------)
	I0701 12:21:53.064965  652196 main.go:141] libmachine: (ha-735960-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:21:53.064981  652196 main.go:141] libmachine: (ha-735960-m02) DBG | About to run SSH command:
	I0701 12:21:53.065000  652196 main.go:141] libmachine: (ha-735960-m02) DBG | exit 0
	I0701 12:21:53.190266  652196 main.go:141] libmachine: (ha-735960-m02) DBG | SSH cmd err, output: <nil>: 
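	The retry.go lines above poll for the VM's DHCP lease with growing, jittered waits (296ms, 255ms, 457ms, ... up to 3.5s) until the IP appears, then wait for SSH the same way. A generic sketch of that backoff pattern:

```go
// Sketch: retry a probe with exponential backoff plus jitter,
// logging each wait like the retry.go lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	wait := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// jitter keeps concurrent waiters from polling in lockstep
		sleep := wait + time.Duration(rand.Int63n(int64(wait)/2+1))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		wait *= 2
	}
	return err
}

func main() {
	tries := 0
	err := retryWithBackoff(10, 250*time.Millisecond, func() error {
		tries++
		if tries < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("done:", err)
}
```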
	I0701 12:21:53.190636  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetConfigRaw
	I0701 12:21:53.191272  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:21:53.193658  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.193994  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.194027  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.194274  652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:21:53.194544  652196 machine.go:94] provisionDockerMachine start ...
	I0701 12:21:53.194562  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:53.194814  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.196894  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.197262  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.197291  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.197414  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:53.197654  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.197829  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.198021  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:53.198185  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:53.198432  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:53.198448  652196 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:21:53.306480  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:21:53.306526  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:21:53.306839  652196 buildroot.go:166] provisioning hostname "ha-735960-m02"
	I0701 12:21:53.306870  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:21:53.307063  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.309645  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.310086  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.310116  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.310307  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:53.310514  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.310689  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.310820  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:53.310997  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:53.311210  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:53.311225  652196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960-m02 && echo "ha-735960-m02" | sudo tee /etc/hostname
	I0701 12:21:53.434956  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960-m02
	
	I0701 12:21:53.434992  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.437612  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.438016  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.438040  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.438190  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:53.438418  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.438601  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.438768  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:53.438926  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:53.439106  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:53.439128  652196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:21:53.559115  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 12:21:53.559146  652196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:21:53.559163  652196 buildroot.go:174] setting up certificates
	I0701 12:21:53.559174  652196 provision.go:84] configureAuth start
	I0701 12:21:53.559186  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:21:53.559514  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:21:53.562119  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.562516  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.562550  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.562753  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.564741  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.565063  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.565082  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.565233  652196 provision.go:143] copyHostCerts
	I0701 12:21:53.565266  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:21:53.565309  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:21:53.565318  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:21:53.565379  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:21:53.565450  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:21:53.565468  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:21:53.565474  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:21:53.565492  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:21:53.565533  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:21:53.565549  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:21:53.565555  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:21:53.565570  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:21:53.565618  652196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960-m02 san=[127.0.0.1 192.168.39.86 ha-735960-m02 localhost minikube]
	I0701 12:21:53.749696  652196 provision.go:177] copyRemoteCerts
	I0701 12:21:53.749755  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:21:53.749780  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.752460  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.752780  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.752813  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.752952  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:53.753159  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.753385  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:53.753547  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:21:53.835990  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:21:53.836060  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:21:53.858665  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:21:53.858753  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0701 12:21:53.880281  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:21:53.880367  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 12:21:53.902677  652196 provision.go:87] duration metric: took 343.48703ms to configureAuth
	I0701 12:21:53.902709  652196 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:21:53.903020  652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:21:53.903053  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:53.903351  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.905929  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.906189  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.906216  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.906438  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:53.906667  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.906826  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.906966  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:53.907119  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:53.907282  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:53.907294  652196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:21:54.019474  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:21:54.019501  652196 buildroot.go:70] root file system type: tmpfs
	I0701 12:21:54.019656  652196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:21:54.019681  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:54.022816  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:54.023184  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:54.023208  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:54.023371  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:54.023579  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:54.023787  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:54.023946  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:54.024146  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:54.024319  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:54.024384  652196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.16"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:21:54.147740  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.16
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:21:54.147778  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:54.150547  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:54.151173  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:54.151208  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:54.151345  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:54.151561  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:54.151771  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:54.151918  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:54.152095  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:54.152266  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:54.152281  652196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:21:56.028628  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 12:21:56.028682  652196 machine.go:97] duration metric: took 2.834118436s to provisionDockerMachine
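	The diff-or-replace command above makes the unit update idempotent: docker is only re-enabled and restarted when the rendered docker.service actually changed (here it did not exist yet, hence the diff error and the symlink creation). A sketch of the same compare-then-swap pattern; paths and commands are illustrative:

```go
// Sketch: install a new systemd unit only if it differs from the current one,
// then daemon-reload, enable, and restart the service.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func updateUnit(current, next string) error {
	old, _ := os.ReadFile(current) // a missing file simply compares as changed
	fresh, err := os.ReadFile(next)
	if err != nil {
		return err
	}
	if bytes.Equal(old, fresh) {
		fmt.Println("unit unchanged; skipping restart")
		return nil
	}
	if err := os.Rename(next, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
	}
	return nil
}

func main() {
	if err := updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```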
	I0701 12:21:56.028701  652196 start.go:293] postStartSetup for "ha-735960-m02" (driver="kvm2")
	I0701 12:21:56.028716  652196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:21:56.028738  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.029099  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:21:56.029132  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:56.031882  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.032264  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.032289  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.032433  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:56.032608  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.032817  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:56.032971  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:21:56.117309  652196 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:21:56.121231  652196 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:21:56.121263  652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:21:56.121324  652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:21:56.121391  652196 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:21:56.121402  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:21:56.121478  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:21:56.130302  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:21:56.152776  652196 start.go:296] duration metric: took 124.058691ms for postStartSetup
	I0701 12:21:56.152821  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.153142  652196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:21:56.153170  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:56.155689  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.156094  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.156120  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.156332  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:56.156555  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.156727  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:56.156917  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:21:56.240391  652196 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:21:56.240454  652196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:21:56.280843  652196 fix.go:56] duration metric: took 18.651393475s for fixHost
	I0701 12:21:56.280895  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:56.283268  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.283590  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.283617  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.283860  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:56.284107  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.284307  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.284501  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:56.284686  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:56.284888  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:56.284903  652196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0701 12:21:56.398873  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836516.359963406
	
	I0701 12:21:56.398893  652196 fix.go:216] guest clock: 1719836516.359963406
	I0701 12:21:56.398901  652196 fix.go:229] Guest: 2024-07-01 12:21:56.359963406 +0000 UTC Remote: 2024-07-01 12:21:56.280872467 +0000 UTC m=+42.319261894 (delta=79.090939ms)
	I0701 12:21:56.398919  652196 fix.go:200] guest clock delta is within tolerance: 79.090939ms
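
The fix.go lines above compare the guest clock against the host's remote timestamp and only resync when the delta exceeds a tolerance; here 79ms passed. A sketch of that comparison, using the two timestamps from the log (the one-second tolerance is an assumed placeholder, not necessarily minikube's constant):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values lifted from the log lines above; tolerance is an assumption.
	guest := time.Unix(1719836516, 359963406)
	remote := time.Date(2024, 7, 1, 12, 21, 56, 280872467, time.UTC)
	const tolerance = time.Second

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v: would resync the guest clock\n", delta)
	}
}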
	I0701 12:21:56.398924  652196 start.go:83] releasing machines lock for "ha-735960-m02", held for 18.769503298s
	I0701 12:21:56.398940  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.399198  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:21:56.401982  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.402404  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.402436  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.404680  652196 out.go:177] * Found network options:
	I0701 12:21:56.406167  652196 out.go:177]   - NO_PROXY=192.168.39.16
	W0701 12:21:56.407620  652196 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:21:56.407664  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.408285  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.408498  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.408606  652196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:21:56.408647  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	W0701 12:21:56.408741  652196 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:21:56.408826  652196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0701 12:21:56.408849  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:56.411170  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.411559  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.411598  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.411651  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.411933  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:56.412130  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.412221  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.412247  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.412295  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:56.412519  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:56.412508  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:21:56.412720  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.412871  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:56.412987  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	W0701 12:21:56.492511  652196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:21:56.492595  652196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:21:56.515270  652196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:21:56.515305  652196 start.go:494] detecting cgroup driver to use...
	I0701 12:21:56.515419  652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:21:56.549004  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:21:56.560711  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:21:56.578763  652196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:21:56.578832  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:21:56.589742  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:21:56.606645  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:21:56.620036  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:21:56.632033  652196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:21:56.642458  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:21:56.653078  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:21:56.663035  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:21:56.673203  652196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:21:56.682348  652196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:21:56.691388  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:56.798709  652196 ssh_runner.go:195] Run: sudo systemctl restart containerd
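
The sed runs above switch containerd to the cgroupfs driver by rewriting its config.toml in place, most importantly flipping SystemdCgroup to false while preserving indentation. A line-oriented Go equivalent of that one substitution, shown here purely for illustration:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A fragment shaped like containerd's config.toml, for illustration only.
	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`

	// Same effect as: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
}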
	I0701 12:21:56.821386  652196 start.go:494] detecting cgroup driver to use...
	I0701 12:21:56.821493  652196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:21:56.841303  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:21:56.857934  652196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:21:56.877318  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:21:56.889777  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:21:56.901844  652196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:21:56.927595  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:21:56.940849  652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:21:56.958116  652196 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:21:56.961664  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:21:56.969985  652196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:21:56.985048  652196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:21:57.096072  652196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:21:57.211289  652196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:21:57.211354  652196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 12:21:57.227069  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:57.341292  652196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:22:58.423195  652196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.08185799s)
	I0701 12:22:58.423268  652196 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0701 12:22:58.444321  652196 out.go:177] 
	W0701 12:22:58.445678  652196 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 01 12:21:54 ha-735960-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 01 12:21:54 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:54.524329635Z" level=info msg="Starting up"
	Jul 01 12:21:54 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:54.525054987Z" level=info msg="containerd not running, starting managed containerd"
	Jul 01 12:21:54 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:54.525787354Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=513
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.553695593Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572290393Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572432449Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572518940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572558429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572981597Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573093539Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573355911Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573425452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573469593Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573505057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573782642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.574848351Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.576951334Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577031827Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577253828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577304329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577551634Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577624370Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577665230Z" level=info msg="metadata content store policy set" policy=shared
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.580979416Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581128476Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581284824Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581371031Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581432559Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581524784Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581996275Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582118070Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582162131Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582245548Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582319648Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582368655Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582407448Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582445279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582484550Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582521928Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582558472Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582601035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582656126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582693985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582741537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582779033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582815513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582853076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582892671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582938669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582980248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583032987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583083364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583122445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583161506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583262727Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583333396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583373579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583414811Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583520612Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583751718Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583800626Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583838317Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583874340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583912430Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583991424Z" level=info msg="NRI interface is disabled by configuration."
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584364167Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584467963Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584654486Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584785754Z" level=info msg="containerd successfully booted in 0.032655s"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.555699119Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.620790434Z" level=info msg="Loading containers: start."
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.813021303Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.888534738Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.940299653Z" level=info msg="Loading containers: done."
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.956534314Z" level=info msg="Docker daemon" commit=ff1e2c0 containerd-snapshotter=false storage-driver=overlay2 version=27.0.1
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.956851438Z" level=info msg="Daemon has completed initialization"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.988054435Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.988129188Z" level=info msg="API listen on [::]:2376"
	Jul 01 12:21:55 ha-735960-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.316115209Z" level=info msg="Processing signal 'terminated'"
	Jul 01 12:21:57 ha-735960-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317321834Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317386191Z" level=info msg="Daemon shutdown complete"
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317447382Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317464543Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 01 12:21:58 ha-735960-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 01 12:21:58 ha-735960-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 01 12:21:58 ha-735960-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 01 12:21:58 ha-735960-m02 dockerd[1188]: time="2024-07-01T12:21:58.364754006Z" level=info msg="Starting up"
	Jul 01 12:22:58 ha-735960-m02 dockerd[1188]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 01 12:22:58 ha-735960-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 01 12:22:58 ha-735960-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 01 12:22:58 ha-735960-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0701 12:22:58.445741  652196 out.go:239] * 
	W0701 12:22:58.447325  652196 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:22:58.449434  652196 out.go:177] 
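
The decisive line in the journalctl dump above is dockerd giving up on /run/containerd/containerd.sock after roughly a minute ("context deadline exceeded"), which is why `systemctl restart docker` returned status 1 and the start aborted with RUNTIME_ENABLE. When triaging this by hand, a quick first check is whether that socket ever starts accepting connections; a throwaway Go probe to run on the guest could look like the following sketch:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/run/containerd/containerd.sock" // path from the dockerd error above
	deadline := time.Now().Add(60 * time.Second)   // roughly dockerd's own dial budget
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("unix", sock, time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("containerd socket is accepting connections")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out: nothing listening on", sock)
}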
	
	
	==> Docker <==
	Jul 01 12:21:44 ha-735960 dockerd[1190]: time="2024-07-01T12:21:44.208507474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:22:05 ha-735960 dockerd[1184]: time="2024-07-01T12:22:05.425890009Z" level=info msg="ignoring event" container=d97b6df80577316a9cf70b2af0f8d52bb2bd7071ff932a8f1f03df9497724786 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 01 12:22:05 ha-735960 dockerd[1190]: time="2024-07-01T12:22:05.426406022Z" level=info msg="shim disconnected" id=d97b6df80577316a9cf70b2af0f8d52bb2bd7071ff932a8f1f03df9497724786 namespace=moby
	Jul 01 12:22:05 ha-735960 dockerd[1190]: time="2024-07-01T12:22:05.427162251Z" level=warning msg="cleaning up after shim disconnected" id=d97b6df80577316a9cf70b2af0f8d52bb2bd7071ff932a8f1f03df9497724786 namespace=moby
	Jul 01 12:22:05 ha-735960 dockerd[1190]: time="2024-07-01T12:22:05.427275716Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 01 12:22:06 ha-735960 dockerd[1190]: time="2024-07-01T12:22:06.439101176Z" level=info msg="shim disconnected" id=ad4259a9c8ee03ff4c6910c68c5c866481fede150d57267cdc957e46aca4ef80 namespace=moby
	Jul 01 12:22:06 ha-735960 dockerd[1184]: time="2024-07-01T12:22:06.441768147Z" level=info msg="ignoring event" container=ad4259a9c8ee03ff4c6910c68c5c866481fede150d57267cdc957e46aca4ef80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 01 12:22:06 ha-735960 dockerd[1190]: time="2024-07-01T12:22:06.442054407Z" level=warning msg="cleaning up after shim disconnected" id=ad4259a9c8ee03ff4c6910c68c5c866481fede150d57267cdc957e46aca4ef80 namespace=moby
	Jul 01 12:22:06 ha-735960 dockerd[1190]: time="2024-07-01T12:22:06.442214156Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 01 12:22:24 ha-735960 dockerd[1190]: time="2024-07-01T12:22:24.071877635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:22:24 ha-735960 dockerd[1190]: time="2024-07-01T12:22:24.072398316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:22:24 ha-735960 dockerd[1190]: time="2024-07-01T12:22:24.072506177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:22:24 ha-735960 dockerd[1190]: time="2024-07-01T12:22:24.072761669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:22:27 ha-735960 dockerd[1190]: time="2024-07-01T12:22:27.091757274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:22:27 ha-735960 dockerd[1190]: time="2024-07-01T12:22:27.091819785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:22:27 ha-735960 dockerd[1190]: time="2024-07-01T12:22:27.091834055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:22:27 ha-735960 dockerd[1190]: time="2024-07-01T12:22:27.092367194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:22:47 ha-735960 dockerd[1184]: time="2024-07-01T12:22:47.577930706Z" level=info msg="ignoring event" container=e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 01 12:22:47 ha-735960 dockerd[1190]: time="2024-07-01T12:22:47.578670317Z" level=info msg="shim disconnected" id=e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30 namespace=moby
	Jul 01 12:22:47 ha-735960 dockerd[1190]: time="2024-07-01T12:22:47.578983718Z" level=warning msg="cleaning up after shim disconnected" id=e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30 namespace=moby
	Jul 01 12:22:47 ha-735960 dockerd[1190]: time="2024-07-01T12:22:47.579585559Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 01 12:22:48 ha-735960 dockerd[1184]: time="2024-07-01T12:22:48.582829662Z" level=info msg="ignoring event" container=829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 01 12:22:48 ha-735960 dockerd[1190]: time="2024-07-01T12:22:48.583282892Z" level=info msg="shim disconnected" id=829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d namespace=moby
	Jul 01 12:22:48 ha-735960 dockerd[1190]: time="2024-07-01T12:22:48.584157023Z" level=warning msg="cleaning up after shim disconnected" id=829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d namespace=moby
	Jul 01 12:22:48 ha-735960 dockerd[1190]: time="2024-07-01T12:22:48.584285564Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e546c39248bc8       56ce0fd9fb532                                                                                         32 seconds ago       Exited              kube-apiserver            2                   16dae930b4edb       kube-apiserver-ha-735960
	829fe19c75ce3       e874818b3caac                                                                                         35 seconds ago       Exited              kube-controller-manager   2                   5e2a9b91be69c       kube-controller-manager-ha-735960
	cecb3dd12e16e       38af8ddebf499                                                                                         About a minute ago   Running             kube-vip                  0                   8d1562fb4b8c3       kube-vip-ha-735960
	6a200a6b49020       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      1                   5b1097d48d724       etcd-ha-735960
	2d71437c5f06d       7820c83aa1394                                                                                         About a minute ago   Running             kube-scheduler            1                   fa7dea6a1b8bd       kube-scheduler-ha-735960
	14112a4d8f2cb       38af8ddebf499                                                                                         2 minutes ago        Exited              kube-vip                  1                   46ab74fdab7e2       kube-vip-ha-735960
	1ef6d9da6a9c5       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 minutes ago        Exited              busybox                   0                   1f5ccc7b0e655       busybox-fc5497c4f-pjfcw
	a9c30cd4b3455       cbb01a7bd410d                                                                                         6 minutes ago        Exited              coredns                   0                   7b4b4f7ec4b63       coredns-7db6d8ff4d-nk4lf
	769b0b8751350       cbb01a7bd410d                                                                                         6 minutes ago        Exited              coredns                   0                   7a349370d4f88       coredns-7db6d8ff4d-p4rtz
	97d58c94f3fdc       6e38f40d628db                                                                                         6 minutes ago        Exited              storage-provisioner       0                   9226633ad878a       storage-provisioner
	f472aef5302fd       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              6 minutes ago        Exited              kindnet-cni               0                   ab9c74a502295       kindnet-7f6hm
	6116abe6039dc       53c535741fb44                                                                                         6 minutes ago        Exited              kube-proxy                0                   da69191059798       kube-proxy-lphzn
	cb63d54411807       7820c83aa1394                                                                                         7 minutes ago        Exited              kube-scheduler            0                   19b6b0e6ed64e       kube-scheduler-ha-735960
	24c8926d2b31d       3861cfcd7c04c                                                                                         7 minutes ago        Exited              etcd                      0                   d3b914e19ca22       etcd-ha-735960
	
	
	==> coredns [769b0b875135] <==
	[INFO] 10.244.1.2:44221 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000082797s
	[INFO] 10.244.2.2:33797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157729s
	[INFO] 10.244.2.2:52590 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004055351s
	[INFO] 10.244.2.2:46983 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003253494s
	[INFO] 10.244.2.2:56187 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205215s
	[INFO] 10.244.2.2:41086 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158307s
	[INFO] 10.244.0.4:47783 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097077s
	[INFO] 10.244.0.4:50743 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001523s
	[INFO] 10.244.0.4:37141 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138763s
	[INFO] 10.244.1.2:32981 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132906s
	[INFO] 10.244.1.2:36762 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001646552s
	[INFO] 10.244.1.2:33583 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072434s
	[INFO] 10.244.2.2:37027 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156518s
	[INFO] 10.244.2.2:58435 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104504s
	[INFO] 10.244.2.2:36107 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090251s
	[INFO] 10.244.0.4:44792 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000227164s
	[INFO] 10.244.0.4:56557 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000140925s
	[INFO] 10.244.1.2:38284 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000232717s
	[INFO] 10.244.2.2:37664 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135198s
	[INFO] 10.244.2.2:60876 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00032392s
	[INFO] 10.244.1.2:37461 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133264s
	[INFO] 10.244.1.2:45182 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117372s
	[INFO] 10.244.1.2:37156 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000240093s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a9c30cd4b345] <==
	[INFO] 10.244.0.4:57095 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002251804s
	[INFO] 10.244.0.4:42381 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081215s
	[INFO] 10.244.0.4:53499 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00124929s
	[INFO] 10.244.0.4:41287 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174281s
	[INFO] 10.244.0.4:36433 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142863s
	[INFO] 10.244.1.2:47688 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130034s
	[INFO] 10.244.1.2:40562 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00183587s
	[INFO] 10.244.1.2:35137 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000771s
	[INFO] 10.244.1.2:37798 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184282s
	[INFO] 10.244.1.2:43876 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008807s
	[INFO] 10.244.2.2:35039 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119303s
	[INFO] 10.244.0.4:53229 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090292s
	[INFO] 10.244.0.4:42097 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011308s
	[INFO] 10.244.1.2:42114 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130767s
	[INFO] 10.244.1.2:56638 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110707s
	[INFO] 10.244.1.2:55805 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093484s
	[INFO] 10.244.2.2:51675 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000145117s
	[INFO] 10.244.2.2:56838 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136843s
	[INFO] 10.244.0.4:60951 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162889s
	[INFO] 10.244.0.4:34776 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112367s
	[INFO] 10.244.0.4:45397 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000073771s
	[INFO] 10.244.0.4:52372 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000058127s
	[INFO] 10.244.1.2:41033 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131962s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0701 12:22:59.377640    2586 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0701 12:22:59.378152    2586 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0701 12:22:59.379637    2586 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0701 12:22:59.379997    2586 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0701 12:22:59.381475    2586 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul 1 12:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050877] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036108] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.421397] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.628587] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.463440] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +4.322115] systemd-fstab-generator[476]: Ignoring "noauto" option for root device
	[  +0.057798] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060958] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
	[  +2.352578] systemd-fstab-generator[1113]: Ignoring "noauto" option for root device
	[  +0.297044] systemd-fstab-generator[1150]: Ignoring "noauto" option for root device
	[  +0.121689] systemd-fstab-generator[1162]: Ignoring "noauto" option for root device
	[  +0.127513] systemd-fstab-generator[1176]: Ignoring "noauto" option for root device
	[  +2.293985] kauditd_printk_skb: 195 callbacks suppressed
	[  +0.325101] systemd-fstab-generator[1411]: Ignoring "noauto" option for root device
	[  +0.108851] systemd-fstab-generator[1423]: Ignoring "noauto" option for root device
	[  +0.138237] systemd-fstab-generator[1435]: Ignoring "noauto" option for root device
	[  +0.156114] systemd-fstab-generator[1450]: Ignoring "noauto" option for root device
	[  +0.494872] systemd-fstab-generator[1603]: Ignoring "noauto" option for root device
	[  +6.977462] kauditd_printk_skb: 176 callbacks suppressed
	[ +11.291301] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [24c8926d2b31] <==
	{"level":"info","ts":"2024-07-01T12:21:01.297933Z","caller":"traceutil/trace.go:171","msg":"trace[249123960] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; }","duration":"4.106112275s","start":"2024-07-01T12:20:57.191803Z","end":"2024-07-01T12:21:01.297915Z","steps":["trace[249123960] 'agreement among raft nodes before linearized reading'  (duration: 4.10601913s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-01T12:21:01.298006Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-01T12:20:57.191796Z","time spent":"4.106166982s","remote":"127.0.0.1:56240","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":0,"response size":0,"request content":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" count_only:true "}
	2024/07/01 12:21:01 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/01 12:21:01 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/01 12:21:01 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-01T12:21:01.381902Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.16:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-01T12:21:01.38194Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.16:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-01T12:21:01.38203Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b6c76b3131c1024","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-01T12:21:01.382382Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c77bbbee62c21090"}
	{"level":"info","ts":"2024-07-01T12:21:01.382398Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c77bbbee62c21090"}
	{"level":"info","ts":"2024-07-01T12:21:01.38247Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c77bbbee62c21090"}
	{"level":"info","ts":"2024-07-01T12:21:01.382583Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b6c76b3131c1024","remote-peer-id":"c77bbbee62c21090"}
	{"level":"info","ts":"2024-07-01T12:21:01.382685Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b6c76b3131c1024","remote-peer-id":"c77bbbee62c21090"}
	{"level":"info","ts":"2024-07-01T12:21:01.382809Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b6c76b3131c1024","remote-peer-id":"c77bbbee62c21090"}
	{"level":"info","ts":"2024-07-01T12:21:01.382826Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c77bbbee62c21090"}
	{"level":"info","ts":"2024-07-01T12:21:01.382832Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:21:01.382882Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:21:01.3829Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:21:01.385706Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b6c76b3131c1024","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:21:01.385804Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b6c76b3131c1024","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:21:01.385838Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b6c76b3131c1024","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:21:01.385849Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:21:01.406065Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.16:2380"}
	{"level":"info","ts":"2024-07-01T12:21:01.406193Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.16:2380"}
	{"level":"info","ts":"2024-07-01T12:21:01.406214Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-735960","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.16:2380"],"advertise-client-urls":["https://192.168.39.16:2379"]}
	
	
	==> etcd [6a200a6b4902] <==
	{"level":"info","ts":"2024-07-01T12:22:54.688918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:54.689365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:54.689616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:54.689896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:54.689984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"warn","ts":"2024-07-01T12:22:54.766483Z","caller":"etcdserver/server.go:2089","msg":"failed to publish local member to cluster through raft","local-member-id":"b6c76b3131c1024","local-member-attributes":"{Name:ha-735960 ClientURLs:[https://192.168.39.16:2379]}","request-path":"/0/members/b6c76b3131c1024/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2024-07-01T12:22:54.810935Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c77bbbee62c21090","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:22:54.81101Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c77bbbee62c21090","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:22:54.827555Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: i/o timeout"}
	{"level":"warn","ts":"2024-07-01T12:22:54.827561Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: i/o timeout"}
	{"level":"info","ts":"2024-07-01T12:22:56.088711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:56.088779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:56.088792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:56.088806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:56.088813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:57.488845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:57.488894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:57.488907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:57.488922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:57.488929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:58.888088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:58.888193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:58.888234Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:58.888281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:58.888295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	
	
	==> kernel <==
	 12:22:59 up 1 min,  0 users,  load average: 0.14, 0.07, 0.02
	Linux ha-735960 5.10.207 #1 SMP Wed Jun 26 19:37:34 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f472aef5302f] <==
	I0701 12:20:12.428842       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:20:22.443154       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:20:22.443292       1 main.go:227] handling current node
	I0701 12:20:22.443323       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:20:22.443388       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:20:22.443605       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:20:22.443653       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:20:22.443793       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:20:22.443836       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:20:32.451395       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:20:32.451431       1 main.go:227] handling current node
	I0701 12:20:32.451481       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:20:32.451486       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:20:32.451947       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:20:32.451980       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:20:32.452873       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:20:32.453015       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:20:42.470169       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:20:42.470264       1 main.go:227] handling current node
	I0701 12:20:42.470289       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:20:42.470302       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:20:42.470523       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:20:42.470616       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:20:42.470868       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:20:42.470914       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [e546c39248bc] <==
	I0701 12:22:27.228496       1 options.go:221] external host was not specified, using 192.168.39.16
	I0701 12:22:27.229584       1 server.go:148] Version: v1.30.2
	I0701 12:22:27.229706       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:22:27.544729       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0701 12:22:27.547846       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0701 12:22:27.551600       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0701 12:22:27.551634       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0701 12:22:27.551982       1 instance.go:299] Using reconciler: lease
	W0701 12:22:47.544372       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0701 12:22:47.544664       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0701 12:22:47.553171       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
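	
	The fatal exit above is the apiserver giving up 20 seconds after "Using reconciler: lease" (12:22:27 to 12:22:47) because it cannot complete a TLS handshake with etcd on 127.0.0.1:2379, consistent with the quorum-less etcd state shown earlier; kubelet then restarts the container into the CrashLoopBackOff seen below. A quick way to confirm nothing is serving on the etcd or apiserver ports, sketched with minikube ssh (ss is assumed to be present in the Buildroot guest; a busybox netstat -tlnp would be the fallback):
	
	  out/minikube-linux-amd64 -p ha-735960 ssh "sudo ss -tlnp | grep -E ':(2379|8443)'"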
	
	
	==> kube-controller-manager [829fe19c75ce] <==
	I0701 12:22:24.521097       1 serving.go:380] Generated self-signed cert in-memory
	I0701 12:22:24.837441       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0701 12:22:24.837478       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:22:24.839276       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0701 12:22:24.839470       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0701 12:22:24.839988       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0701 12:22:24.840049       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0701 12:22:48.561111       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.16:8443/healthz\": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:57228->192.168.39.16:8443: read: connection reset by peer"
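	
	The controller manager here is a victim rather than a cause: it times out waiting for the apiserver's /healthz on 192.168.39.16:8443, and its connection-refused/reset errors line up with the apiserver crash loop above. A hedged probe from the test host (curl assumed available; -k skips verification because the apiserver certificate is not in the local trust store):
	
	  curl -k https://192.168.39.16:8443/healthz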
	
	
	==> kube-proxy [6116abe6039d] <==
	I0701 12:16:09.205590       1 server_linux.go:69] "Using iptables proxy"
	I0701 12:16:09.223098       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.16"]
	I0701 12:16:09.284088       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0701 12:16:09.284134       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0701 12:16:09.284152       1 server_linux.go:165] "Using iptables Proxier"
	I0701 12:16:09.286802       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0701 12:16:09.287240       1 server.go:872] "Version info" version="v1.30.2"
	I0701 12:16:09.287274       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:16:09.288803       1 config.go:192] "Starting service config controller"
	I0701 12:16:09.288830       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0701 12:16:09.289262       1 config.go:101] "Starting endpoint slice config controller"
	I0701 12:16:09.289283       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0701 12:16:09.290101       1 config.go:319] "Starting node config controller"
	I0701 12:16:09.290125       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0701 12:16:09.389941       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0701 12:16:09.390030       1 shared_informer.go:320] Caches are synced for service config
	I0701 12:16:09.390393       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2d71437c5f06] <==
	Trace[1841834859]: ---"Objects listed" error:Get "https://192.168.39.16:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:57242->192.168.39.16:8443: read: connection reset by peer 10642ms (12:22:48.563)
	Trace[1841834859]: [10.642423199s] [10.642423199s] END
	E0701 12:22:48.563438       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.16:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:57242->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.563506       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.16:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59182->192.168.39.16:8443: read: connection reset by peer
	E0701 12:22:48.563570       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.16:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59182->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.563641       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59186->192.168.39.16:8443: read: connection reset by peer
	E0701 12:22:48.563665       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59186->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.563724       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.16:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59188->192.168.39.16:8443: read: connection reset by peer
	E0701 12:22:48.563747       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.16:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59188->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.563814       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.16:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59202->192.168.39.16:8443: read: connection reset by peer
	E0701 12:22:48.563830       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.16:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59202->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.563886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59238->192.168.39.16:8443: read: connection reset by peer
	E0701 12:22:48.563907       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59238->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.563967       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59262->192.168.39.16:8443: read: connection reset by peer
	E0701 12:22:48.563982       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59262->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.563997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59210->192.168.39.16:8443: read: connection reset by peer
	E0701 12:22:48.564229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59210->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.669137       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.16:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:22:48.669192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.16:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:22:51.792652       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.16:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:22:51.792757       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.16:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:22:52.248014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.16:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:22:52.248063       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.16:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:22:55.201032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.16:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:22:55.201141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.16:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	
	
	==> kube-scheduler [cb63d5441180] <==
	W0701 12:15:50.916180       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0701 12:15:50.916379       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0701 12:15:51.752711       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 12:15:51.752853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0701 12:15:51.794007       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0701 12:15:51.794055       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0701 12:15:51.931391       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0701 12:15:51.931434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0701 12:15:51.950120       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0701 12:15:51.950162       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0701 12:15:51.968922       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0701 12:15:51.969125       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0701 12:15:51.985991       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0701 12:15:51.986032       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0701 12:15:52.054298       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0701 12:15:52.054329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0701 12:15:52.260873       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0701 12:15:52.260979       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0701 12:15:54.206866       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0701 12:19:09.710917       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xv95g\": pod kube-proxy-xv95g is already assigned to node \"ha-735960-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xv95g" node="ha-735960-m04"
	E0701 12:19:09.713930       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xv95g\": pod kube-proxy-xv95g is already assigned to node \"ha-735960-m04\"" pod="kube-system/kube-proxy-xv95g"
	I0701 12:21:01.200143       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0701 12:21:01.200254       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0701 12:21:01.200659       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0701 12:21:01.212693       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 01 12:22:42 ha-735960 kubelet[1610]: I0701 12:22:42.360672    1610 kubelet_node_status.go:73] "Attempting to register node" node="ha-735960"
	Jul 01 12:22:44 ha-735960 kubelet[1610]: E0701 12:22:44.574795    1610 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.254:8443: connect: no route to host" node="ha-735960"
	Jul 01 12:22:44 ha-735960 kubelet[1610]: E0701 12:22:44.574858    1610 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-735960?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jul 01 12:22:47 ha-735960 kubelet[1610]: E0701 12:22:47.092648    1610 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-735960\" not found"
	Jul 01 12:22:47 ha-735960 kubelet[1610]: E0701 12:22:47.646121    1610 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-735960.17de162e90ad8f5f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-735960,UID:ha-735960,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-735960,},FirstTimestamp:2024-07-01 12:21:36.953708383 +0000 UTC m=+0.183371310,LastTimestamp:2024-07-01 12:21:36.953708383 +0000 UTC m=+0.183371310,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-735960,}"
	Jul 01 12:22:48 ha-735960 kubelet[1610]: I0701 12:22:48.159877    1610 scope.go:117] "RemoveContainer" containerID="d97b6df80577316a9cf70b2af0f8d52bb2bd7071ff932a8f1f03df9497724786"
	Jul 01 12:22:48 ha-735960 kubelet[1610]: I0701 12:22:48.161197    1610 scope.go:117] "RemoveContainer" containerID="e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30"
	Jul 01 12:22:48 ha-735960 kubelet[1610]: E0701 12:22:48.162173    1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-735960_kube-system(858bfcad8b1d02b8cdc3dc83c4af060c)\"" pod="kube-system/kube-apiserver-ha-735960" podUID="858bfcad8b1d02b8cdc3dc83c4af060c"
	Jul 01 12:22:49 ha-735960 kubelet[1610]: I0701 12:22:49.180032    1610 scope.go:117] "RemoveContainer" containerID="ad4259a9c8ee03ff4c6910c68c5c866481fede150d57267cdc957e46aca4ef80"
	Jul 01 12:22:49 ha-735960 kubelet[1610]: I0701 12:22:49.181799    1610 scope.go:117] "RemoveContainer" containerID="829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d"
	Jul 01 12:22:49 ha-735960 kubelet[1610]: E0701 12:22:49.182112    1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-735960_kube-system(9a545edc3c0d885e2370d3a24ff8ac4b)\"" pod="kube-system/kube-controller-manager-ha-735960" podUID="9a545edc3c0d885e2370d3a24ff8ac4b"
	Jul 01 12:22:50 ha-735960 kubelet[1610]: I0701 12:22:50.089167    1610 scope.go:117] "RemoveContainer" containerID="e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30"
	Jul 01 12:22:50 ha-735960 kubelet[1610]: E0701 12:22:50.089722    1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-735960_kube-system(858bfcad8b1d02b8cdc3dc83c4af060c)\"" pod="kube-system/kube-apiserver-ha-735960" podUID="858bfcad8b1d02b8cdc3dc83c4af060c"
	Jul 01 12:22:50 ha-735960 kubelet[1610]: I0701 12:22:50.202365    1610 scope.go:117] "RemoveContainer" containerID="829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d"
	Jul 01 12:22:50 ha-735960 kubelet[1610]: E0701 12:22:50.202700    1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-735960_kube-system(9a545edc3c0d885e2370d3a24ff8ac4b)\"" pod="kube-system/kube-controller-manager-ha-735960" podUID="9a545edc3c0d885e2370d3a24ff8ac4b"
	Jul 01 12:22:51 ha-735960 kubelet[1610]: I0701 12:22:51.209935    1610 scope.go:117] "RemoveContainer" containerID="829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d"
	Jul 01 12:22:51 ha-735960 kubelet[1610]: E0701 12:22:51.210647    1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-735960_kube-system(9a545edc3c0d885e2370d3a24ff8ac4b)\"" pod="kube-system/kube-controller-manager-ha-735960" podUID="9a545edc3c0d885e2370d3a24ff8ac4b"
	Jul 01 12:22:51 ha-735960 kubelet[1610]: I0701 12:22:51.576067    1610 kubelet_node_status.go:73] "Attempting to register node" node="ha-735960"
	Jul 01 12:22:53 ha-735960 kubelet[1610]: I0701 12:22:53.728933    1610 scope.go:117] "RemoveContainer" containerID="e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30"
	Jul 01 12:22:53 ha-735960 kubelet[1610]: E0701 12:22:53.729329    1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-735960_kube-system(858bfcad8b1d02b8cdc3dc83c4af060c)\"" pod="kube-system/kube-apiserver-ha-735960" podUID="858bfcad8b1d02b8cdc3dc83c4af060c"
	Jul 01 12:22:53 ha-735960 kubelet[1610]: E0701 12:22:53.789831    1610 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.254:8443: connect: no route to host" node="ha-735960"
	Jul 01 12:22:53 ha-735960 kubelet[1610]: E0701 12:22:53.790000    1610 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-735960?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jul 01 12:22:56 ha-735960 kubelet[1610]: W0701 12:22:56.862031    1610 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 01 12:22:56 ha-735960 kubelet[1610]: E0701 12:22:56.862122    1610 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 01 12:22:57 ha-735960 kubelet[1610]: E0701 12:22:57.094040    1610 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-735960\" not found"
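	
	These kubelet entries show why the node never re-registers: every request targets the HA endpoint control-plane.minikube.internal, which maps to the virtual IP 192.168.39.254:8443, and with no healthy apiserver behind the VIP the dials fail with "no route to host". A sketch for checking the name mapping and the route from inside the guest (getent and ip are assumed to be available in the Buildroot image):
	
	  out/minikube-linux-amd64 -p ha-735960 ssh "getent hosts control-plane.minikube.internal; ip route get 192.168.39.254"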
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-735960 -n ha-735960
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-735960 -n ha-735960: exit status 2 (229.492615ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-735960" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (146.89s)

TestMultiControlPlane/serial/DeleteSecondaryNode (1.89s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-735960 node delete m03 -v=7 --alsologtostderr: exit status 83 (138.543693ms)

-- stdout --
	* The control-plane node ha-735960-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-735960"

-- /stdout --
** stderr ** 
	I0701 12:22:59.997400  652837 out.go:291] Setting OutFile to fd 1 ...
	I0701 12:22:59.997920  652837 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:22:59.997937  652837 out.go:304] Setting ErrFile to fd 2...
	I0701 12:22:59.997944  652837 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:22:59.998423  652837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
	I0701 12:22:59.998939  652837 mustload.go:65] Loading cluster: ha-735960
	I0701 12:22:59.999343  652837 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:22:59.999732  652837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:22:59.999771  652837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:23:00.014934  652837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34811
	I0701 12:23:00.015389  652837 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:23:00.015967  652837 main.go:141] libmachine: Using API Version  1
	I0701 12:23:00.015997  652837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:23:00.016376  652837 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:23:00.016590  652837 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:23:00.017995  652837 host.go:66] Checking if "ha-735960" exists ...
	I0701 12:23:00.018287  652837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:23:00.018321  652837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:23:00.033284  652837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46519
	I0701 12:23:00.033734  652837 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:23:00.034298  652837 main.go:141] libmachine: Using API Version  1
	I0701 12:23:00.034323  652837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:23:00.034672  652837 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:23:00.034875  652837 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:23:00.035429  652837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:23:00.035476  652837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:23:00.050393  652837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34175
	I0701 12:23:00.050873  652837 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:23:00.051381  652837 main.go:141] libmachine: Using API Version  1
	I0701 12:23:00.051414  652837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:23:00.051789  652837 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:23:00.051992  652837 main.go:141] libmachine: (ha-735960-m02) Calling .GetState
	I0701 12:23:00.053687  652837 host.go:66] Checking if "ha-735960-m02" exists ...
	I0701 12:23:00.054013  652837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:23:00.054072  652837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:23:00.068829  652837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41803
	I0701 12:23:00.069300  652837 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:23:00.069787  652837 main.go:141] libmachine: Using API Version  1
	I0701 12:23:00.069810  652837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:23:00.070146  652837 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:23:00.070384  652837 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:23:00.070866  652837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:23:00.070909  652837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:23:00.086011  652837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43627
	I0701 12:23:00.086440  652837 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:23:00.086872  652837 main.go:141] libmachine: Using API Version  1
	I0701 12:23:00.086888  652837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:23:00.087190  652837 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:23:00.087393  652837 main.go:141] libmachine: (ha-735960-m03) Calling .GetState
	I0701 12:23:00.091431  652837 out.go:177] * The control-plane node ha-735960-m03 host is not running: state=Stopped
	I0701 12:23:00.093084  652837 out.go:177]   To start a cluster, run: "minikube start -p ha-735960"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-linux-amd64 -p ha-735960 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr: exit status 7 (431.945043ms)

-- stdout --
	ha-735960
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-735960-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-735960-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-735960-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0701 12:23:00.137810  652879 out.go:291] Setting OutFile to fd 1 ...
	I0701 12:23:00.138055  652879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:23:00.138063  652879 out.go:304] Setting ErrFile to fd 2...
	I0701 12:23:00.138067  652879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:23:00.138286  652879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
	I0701 12:23:00.138496  652879 out.go:298] Setting JSON to false
	I0701 12:23:00.138527  652879 mustload.go:65] Loading cluster: ha-735960
	I0701 12:23:00.138636  652879 notify.go:220] Checking for updates...
	I0701 12:23:00.138936  652879 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:23:00.138951  652879 status.go:255] checking status of ha-735960 ...
	I0701 12:23:00.139341  652879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:23:00.139384  652879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:23:00.154275  652879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38599
	I0701 12:23:00.154698  652879 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:23:00.155310  652879 main.go:141] libmachine: Using API Version  1
	I0701 12:23:00.155340  652879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:23:00.155717  652879 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:23:00.155908  652879 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:23:00.157581  652879 status.go:330] ha-735960 host status = "Running" (err=<nil>)
	I0701 12:23:00.157600  652879 host.go:66] Checking if "ha-735960" exists ...
	I0701 12:23:00.157979  652879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:23:00.158024  652879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:23:00.173033  652879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41453
	I0701 12:23:00.173427  652879 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:23:00.173915  652879 main.go:141] libmachine: Using API Version  1
	I0701 12:23:00.173944  652879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:23:00.174244  652879 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:23:00.174460  652879 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:23:00.177227  652879 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:23:00.177618  652879 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:23:00.177648  652879 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:23:00.177904  652879 host.go:66] Checking if "ha-735960" exists ...
	I0701 12:23:00.178191  652879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:23:00.178222  652879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:23:00.193881  652879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41881
	I0701 12:23:00.194353  652879 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:23:00.194970  652879 main.go:141] libmachine: Using API Version  1
	I0701 12:23:00.194999  652879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:23:00.195382  652879 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:23:00.195592  652879 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:23:00.195833  652879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 12:23:00.195857  652879 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:23:00.198621  652879 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:23:00.199031  652879 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:23:00.199060  652879 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:23:00.199220  652879 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:23:00.199410  652879 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:23:00.199535  652879 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:23:00.199679  652879 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:23:00.281300  652879 ssh_runner.go:195] Run: systemctl --version
	I0701 12:23:00.291876  652879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:23:00.306208  652879 kubeconfig.go:125] found "ha-735960" server: "https://192.168.39.254:8443"
	I0701 12:23:00.306253  652879 api_server.go:166] Checking apiserver status ...
	I0701 12:23:00.306319  652879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 12:23:00.318417  652879 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 12:23:00.318446  652879 status.go:422] ha-735960 apiserver status = Running (err=<nil>)
	I0701 12:23:00.318460  652879 status.go:257] ha-735960 status: &{Name:ha-735960 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0701 12:23:00.318479  652879 status.go:255] checking status of ha-735960-m02 ...
	I0701 12:23:00.318810  652879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:23:00.318862  652879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:23:00.334297  652879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0701 12:23:00.334782  652879 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:23:00.335314  652879 main.go:141] libmachine: Using API Version  1
	I0701 12:23:00.335337  652879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:23:00.335668  652879 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:23:00.335893  652879 main.go:141] libmachine: (ha-735960-m02) Calling .GetState
	I0701 12:23:00.337543  652879 status.go:330] ha-735960-m02 host status = "Running" (err=<nil>)
	I0701 12:23:00.337563  652879 host.go:66] Checking if "ha-735960-m02" exists ...
	I0701 12:23:00.337887  652879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:23:00.337921  652879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:23:00.353097  652879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45493
	I0701 12:23:00.353530  652879 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:23:00.354008  652879 main.go:141] libmachine: Using API Version  1
	I0701 12:23:00.354027  652879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:23:00.354346  652879 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:23:00.354534  652879 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:23:00.357351  652879 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:23:00.357793  652879 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:23:00.357815  652879 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:23:00.357985  652879 host.go:66] Checking if "ha-735960-m02" exists ...
	I0701 12:23:00.358278  652879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:23:00.358316  652879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:23:00.372931  652879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33751
	I0701 12:23:00.373397  652879 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:23:00.373891  652879 main.go:141] libmachine: Using API Version  1
	I0701 12:23:00.373911  652879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:23:00.374244  652879 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:23:00.374526  652879 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:23:00.374731  652879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 12:23:00.374753  652879 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:23:00.377513  652879 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:23:00.377962  652879 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:23:00.377987  652879 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:23:00.378133  652879 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:23:00.378305  652879 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:23:00.378483  652879 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:23:00.378609  652879 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:23:00.460937  652879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:23:00.475673  652879 kubeconfig.go:125] found "ha-735960" server: "https://192.168.39.254:8443"
	I0701 12:23:00.475705  652879 api_server.go:166] Checking apiserver status ...
	I0701 12:23:00.475744  652879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0701 12:23:00.487918  652879 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0701 12:23:00.487945  652879 status.go:422] ha-735960-m02 apiserver status = Stopped (err=<nil>)
	I0701 12:23:00.487958  652879 status.go:257] ha-735960-m02 status: &{Name:ha-735960-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0701 12:23:00.487979  652879 status.go:255] checking status of ha-735960-m03 ...
	I0701 12:23:00.488309  652879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:23:00.488353  652879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:23:00.503638  652879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41641
	I0701 12:23:00.504158  652879 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:23:00.504690  652879 main.go:141] libmachine: Using API Version  1
	I0701 12:23:00.504712  652879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:23:00.504995  652879 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:23:00.505189  652879 main.go:141] libmachine: (ha-735960-m03) Calling .GetState
	I0701 12:23:00.506810  652879 status.go:330] ha-735960-m03 host status = "Stopped" (err=<nil>)
	I0701 12:23:00.506824  652879 status.go:343] host is not running, skipping remaining checks
	I0701 12:23:00.506830  652879 status.go:257] ha-735960-m03 status: &{Name:ha-735960-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0701 12:23:00.506853  652879 status.go:255] checking status of ha-735960-m04 ...
	I0701 12:23:00.507118  652879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:23:00.507150  652879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:23:00.522605  652879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37253
	I0701 12:23:00.523019  652879 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:23:00.523445  652879 main.go:141] libmachine: Using API Version  1
	I0701 12:23:00.523462  652879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:23:00.523782  652879 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:23:00.524021  652879 main.go:141] libmachine: (ha-735960-m04) Calling .GetState
	I0701 12:23:00.525647  652879 status.go:330] ha-735960-m04 host status = "Stopped" (err=<nil>)
	I0701 12:23:00.525662  652879 status.go:343] host is not running, skipping remaining checks
	I0701 12:23:00.525668  652879 status.go:257] ha-735960-m04 status: &{Name:ha-735960-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
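The stderr block above shows how the status command probes each component: `systemctl is-active --quiet kubelet` for the kubelet, and a `pgrep` run over SSH for the apiserver, with a non-zero exit mapped to "Stopped". A minimal sketch of that exit-status pattern in Go (hypothetical helper name, not minikube's actual status code, and run locally rather than through the SSH runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// apiserverStatus mirrors the probe in the log: pgrep exits 0 when a
	// matching process exists and 1 when none does, so the error returned
	// by Run is enough to distinguish Running from Stopped.
	func apiserverStatus() string {
		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if err := cmd.Run(); err != nil {
			return "Stopped" // no matching process (or pgrep itself failed)
		}
		return "Running"
	}

	func main() {
		fmt.Println("apiserver:", apiserverStatus())
	}

In the log the same command is executed on the guest via the SSH runner; the exit-status handling is identical.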
ha_test.go:495: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-735960 -n ha-735960
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-735960 -n ha-735960: exit status 2 (217.577453ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m02 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m03_ha-735960-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m03:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04:/home/docker/cp-test_ha-735960-m03_ha-735960-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m04 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m03_ha-735960-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-735960 cp testdata/cp-test.txt                                                | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2826819896/001/cp-test_ha-735960-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960:/home/docker/cp-test_ha-735960-m04_ha-735960.txt                       |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960 sudo cat                                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m04_ha-735960.txt                                 |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m02:/home/docker/cp-test_ha-735960-m04_ha-735960-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m02 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m04_ha-735960-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m03:/home/docker/cp-test_ha-735960-m04_ha-735960-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m03 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m04_ha-735960-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-735960 node stop m02 -v=7                                                     | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-735960 node start m02 -v=7                                                    | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:20 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-735960 -v=7                                                           | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-735960 -v=7                                                                | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:20 UTC | 01 Jul 24 12:21 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-735960 --wait=true -v=7                                                    | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-735960                                                                | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:22 UTC |                     |
	| node    | ha-735960 node delete m03 -v=7                                                   | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/01 12:21:13
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 12:21:13.996326  652196 out.go:291] Setting OutFile to fd 1 ...
	I0701 12:21:13.996600  652196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:21:13.996610  652196 out.go:304] Setting ErrFile to fd 2...
	I0701 12:21:13.996615  652196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:21:13.996825  652196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
	I0701 12:21:13.997417  652196 out.go:298] Setting JSON to false
	I0701 12:21:13.998463  652196 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7412,"bootTime":1719829062,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 12:21:13.998525  652196 start.go:139] virtualization: kvm guest
	I0701 12:21:14.000967  652196 out.go:177] * [ha-735960] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0701 12:21:14.002666  652196 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 12:21:14.002690  652196 notify.go:220] Checking for updates...
	I0701 12:21:14.005489  652196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:21:14.006983  652196 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:21:14.008350  652196 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-630650/.minikube
	I0701 12:21:14.009593  652196 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 12:21:14.011091  652196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:21:14.012857  652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:21:14.012999  652196 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 12:21:14.013468  652196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:21:14.013542  652196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:21:14.028581  652196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35775
	I0701 12:21:14.028967  652196 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:21:14.029528  652196 main.go:141] libmachine: Using API Version  1
	I0701 12:21:14.029551  652196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:21:14.029916  652196 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:21:14.030116  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:14.065038  652196 out.go:177] * Using the kvm2 driver based on existing profile
	I0701 12:21:14.066535  652196 start.go:297] selected driver: kvm2
	I0701 12:21:14.066551  652196 start.go:901] validating driver "kvm2" against &{Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:21:14.066723  652196 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:21:14.067041  652196 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:21:14.067114  652196 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19166-630650/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0701 12:21:14.082191  652196 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0701 12:21:14.082920  652196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:21:14.082959  652196 cni.go:84] Creating CNI manager for ""
	I0701 12:21:14.082966  652196 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0701 12:21:14.083026  652196 start.go:340] cluster config:
	{Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:21:14.083142  652196 iso.go:125] acquiring lock: {Name:mk5c70910f61bc270c83609c48670eaf9d7e0602 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:21:14.086358  652196 out.go:177] * Starting "ha-735960" primary control-plane node in "ha-735960" cluster
	I0701 12:21:14.087757  652196 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:21:14.087794  652196 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0701 12:21:14.087805  652196 cache.go:56] Caching tarball of preloaded images
	I0701 12:21:14.087882  652196 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:21:14.087892  652196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:21:14.088044  652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:21:14.088232  652196 start.go:360] acquireMachinesLock for ha-735960: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:21:14.088271  652196 start.go:364] duration metric: took 21.615µs to acquireMachinesLock for "ha-735960"
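The `acquireMachinesLock` lines above serialize concurrent operations on the same machine; the lock spec logged here carries a 500ms retry delay and a 13m timeout. A rough poll-until-timeout file lock on Linux gives the flavor (minikube actually uses a named-mutex library, so this flock-based version is only an approximation with a hypothetical lock path):

	package main

	import (
		"fmt"
		"os"
		"syscall"
		"time"
	)

	// acquireLock polls for an exclusive, non-blocking flock on path until
	// the timeout expires, mirroring the Delay/Timeout fields in the log.
	func acquireLock(path string, delay, timeout time.Duration) (*os.File, error) {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
		if err != nil {
			return nil, err
		}
		deadline := time.Now().Add(timeout)
		for {
			if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
				return f, nil // caller releases the lock by closing f
			}
			if time.Now().After(deadline) {
				f.Close()
				return nil, fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		f, err := acquireLock("/tmp/ha-735960.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			panic(err)
		}
		defer f.Close()
		fmt.Println("lock acquired")
	}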
	I0701 12:21:14.088285  652196 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:21:14.088293  652196 fix.go:54] fixHost starting: 
	I0701 12:21:14.088547  652196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:21:14.088578  652196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:21:14.103070  652196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42935
	I0701 12:21:14.103508  652196 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:21:14.104025  652196 main.go:141] libmachine: Using API Version  1
	I0701 12:21:14.104050  652196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:21:14.104424  652196 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:21:14.104649  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:14.104829  652196 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:21:14.106608  652196 fix.go:112] recreateIfNeeded on ha-735960: state=Stopped err=<nil>
	I0701 12:21:14.106630  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	W0701 12:21:14.106790  652196 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:21:14.108833  652196 out.go:177] * Restarting existing kvm2 VM for "ha-735960" ...
	I0701 12:21:14.110060  652196 main.go:141] libmachine: (ha-735960) Calling .Start
	I0701 12:21:14.110234  652196 main.go:141] libmachine: (ha-735960) Ensuring networks are active...
	I0701 12:21:14.110976  652196 main.go:141] libmachine: (ha-735960) Ensuring network default is active
	I0701 12:21:14.111299  652196 main.go:141] libmachine: (ha-735960) Ensuring network mk-ha-735960 is active
	I0701 12:21:14.111665  652196 main.go:141] libmachine: (ha-735960) Getting domain xml...
	I0701 12:21:14.112420  652196 main.go:141] libmachine: (ha-735960) Creating domain...
	I0701 12:21:15.307133  652196 main.go:141] libmachine: (ha-735960) Waiting to get IP...
	I0701 12:21:15.308062  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:15.308526  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:15.308647  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:15.308493  652224 retry.go:31] will retry after 239.111405ms: waiting for machine to come up
	I0701 12:21:15.549211  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:15.549648  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:15.549679  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:15.549597  652224 retry.go:31] will retry after 248.256131ms: waiting for machine to come up
	I0701 12:21:15.799054  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:15.799481  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:15.799534  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:15.799422  652224 retry.go:31] will retry after 380.468685ms: waiting for machine to come up
	I0701 12:21:16.181969  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:16.182432  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:16.182634  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:16.182540  652224 retry.go:31] will retry after 592.847587ms: waiting for machine to come up
	I0701 12:21:16.777393  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:16.777837  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:16.777867  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:16.777790  652224 retry.go:31] will retry after 639.749416ms: waiting for machine to come up
	I0701 12:21:17.419540  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:17.419941  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:17.419965  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:17.419916  652224 retry.go:31] will retry after 891.768613ms: waiting for machine to come up
	I0701 12:21:18.312967  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:18.313455  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:18.313484  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:18.313399  652224 retry.go:31] will retry after 1.112048412s: waiting for machine to come up
	I0701 12:21:19.427190  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:19.427624  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:19.427655  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:19.427568  652224 retry.go:31] will retry after 1.150138437s: waiting for machine to come up
	I0701 12:21:20.579868  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:20.580291  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:20.580325  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:20.580216  652224 retry.go:31] will retry after 1.129763596s: waiting for machine to come up
	I0701 12:21:21.711416  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:21.711892  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:21.711924  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:21.711831  652224 retry.go:31] will retry after 2.143074349s: waiting for machine to come up
	I0701 12:21:23.858081  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:23.858617  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:23.858643  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:23.858578  652224 retry.go:31] will retry after 2.436757856s: waiting for machine to come up
	I0701 12:21:26.297727  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:26.298302  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:26.298352  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:26.298269  652224 retry.go:31] will retry after 2.609229165s: waiting for machine to come up
	I0701 12:21:28.911224  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:28.911698  652196 main.go:141] libmachine: (ha-735960) Found IP for machine: 192.168.39.16
	I0701 12:21:28.911722  652196 main.go:141] libmachine: (ha-735960) Reserving static IP address...
	I0701 12:21:28.911731  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has current primary IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:28.912401  652196 main.go:141] libmachine: (ha-735960) Reserved static IP address: 192.168.39.16
	I0701 12:21:28.912425  652196 main.go:141] libmachine: (ha-735960) Waiting for SSH to be available...
	I0701 12:21:28.912468  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "ha-735960", mac: "52:54:00:6c:20:7c", ip: "192.168.39.16"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:28.912492  652196 main.go:141] libmachine: (ha-735960) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960", mac: "52:54:00:6c:20:7c", ip: "192.168.39.16"}
	I0701 12:21:28.912507  652196 main.go:141] libmachine: (ha-735960) DBG | Getting to WaitForSSH function...
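The run of "will retry after …" lines above is a jittered, roughly doubling backoff while waiting for the restarted VM to reacquire a DHCP lease (239ms, 248ms, 380ms, … up to a few seconds). A minimal sketch of that wait loop, assuming a hypothetical lookupIP helper in place of the real libvirt lease query:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoIP = errors.New("unable to find current IP address")

	// lookupIP stands in for querying the hypervisor's DHCP leases; it is
	// hypothetical and succeeds after a few attempts to keep the sketch runnable.
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errNoIP
		}
		return "192.168.39.16", nil
	}

	// waitForIP retries with a jittered, doubling delay until the deadline,
	// matching the cadence visible in the log above.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for attempt := 0; time.Now().Before(deadline); attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				return ip, nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
			time.Sleep(delay + jitter)
			delay *= 2
		}
		return "", fmt.Errorf("timed out waiting for IP")
	}

	func main() {
		fmt.Println(waitForIP(30 * time.Second))
	}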
	I0701 12:21:28.914934  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:28.915448  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:28.915478  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:28.915627  652196 main.go:141] libmachine: (ha-735960) DBG | Using SSH client type: external
	I0701 12:21:28.915655  652196 main.go:141] libmachine: (ha-735960) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa (-rw-------)
	I0701 12:21:28.915680  652196 main.go:141] libmachine: (ha-735960) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:21:28.915698  652196 main.go:141] libmachine: (ha-735960) DBG | About to run SSH command:
	I0701 12:21:28.915730  652196 main.go:141] libmachine: (ha-735960) DBG | exit 0
	I0701 12:21:29.042314  652196 main.go:141] libmachine: (ha-735960) DBG | SSH cmd err, output: <nil>: 
	I0701 12:21:29.042747  652196 main.go:141] libmachine: (ha-735960) Calling .GetConfigRaw
	I0701 12:21:29.043414  652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:21:29.046291  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.046689  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.046714  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.046967  652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:21:29.047187  652196 machine.go:94] provisionDockerMachine start ...
	I0701 12:21:29.047211  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:29.047467  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.049524  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.049899  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.049924  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.050040  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.050240  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.050477  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.050669  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.050868  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:29.051073  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:29.051086  652196 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:21:29.166645  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:21:29.166687  652196 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:21:29.166983  652196 buildroot.go:166] provisioning hostname "ha-735960"
	I0701 12:21:29.167013  652196 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:21:29.167232  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.169829  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.170228  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.170260  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.170403  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.170603  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.170773  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.170913  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.171082  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:29.171259  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:29.171270  652196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960 && echo "ha-735960" | sudo tee /etc/hostname
	I0701 12:21:29.295697  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960
	
	I0701 12:21:29.295728  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.298625  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.299014  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.299041  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.299233  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.299434  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.299641  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.299795  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.299954  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:29.300149  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:29.300171  652196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:21:29.418489  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
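The hosts script the provisioner just ran is idempotent: it leaves /etc/hosts alone when the new hostname is already mapped, rewrites an existing 127.0.1.1 line when one is present, and appends a new entry otherwise. The same decision logic expressed as a small Go sketch over the file's contents (ensureHostsEntry is a hypothetical name; the real flow shells out to grep/sed/tee as shown above):

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostsEntry reproduces the script's behavior on an in-memory
	// copy of /etc/hosts.
	func ensureHostsEntry(hosts, name string) string {
		lines := strings.Split(hosts, "\n")
		// Already mapped? (mirrors: grep -xq '.*\s<name>' /etc/hosts)
		for _, l := range lines {
			f := strings.Fields(l)
			if len(f) >= 2 && f[len(f)-1] == name {
				return hosts
			}
		}
		// Replace an existing 127.0.1.1 alias (mirrors the sed branch).
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name
				return strings.Join(lines, "\n")
			}
		}
		// Otherwise append (mirrors the tee -a branch).
		return hosts + "\n127.0.1.1 " + name
	}

	func main() {
		fmt.Println(ensureHostsEntry("127.0.0.1 localhost", "ha-735960"))
	}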
	I0701 12:21:29.418522  652196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:21:29.418577  652196 buildroot.go:174] setting up certificates
	I0701 12:21:29.418593  652196 provision.go:84] configureAuth start
	I0701 12:21:29.418612  652196 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:21:29.418889  652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:21:29.421815  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.422238  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.422275  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.422477  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.424787  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.425187  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.425216  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.425427  652196 provision.go:143] copyHostCerts
	I0701 12:21:29.425466  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:21:29.425530  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:21:29.425542  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:21:29.425624  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:21:29.425732  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:21:29.425753  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:21:29.425758  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:21:29.425798  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:21:29.425856  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:21:29.425872  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:21:29.425877  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:21:29.425897  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:21:29.425958  652196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960 san=[127.0.0.1 192.168.39.16 ha-735960 localhost minikube]
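The "generating server cert" step above issues a server certificate signed by the profile's CA, with the SANs listed in the log (127.0.0.1, the VM IP, the hostname, localhost, minikube). A condensed sketch of SAN-bearing certificate generation with Go's crypto/x509; it is self-signed here for brevity, whereas the real flow signs with ca.pem/ca-key.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-735960"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs as seen in the provision log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.16")},
			DNSNames:    []string{"ha-735960", "localhost", "minikube"},
		}
		// Self-signed for the sketch: template doubles as parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}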
	I0701 12:21:29.592360  652196 provision.go:177] copyRemoteCerts
	I0701 12:21:29.592437  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:21:29.592463  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.595489  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.595884  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.595908  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.596131  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.596356  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.596515  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.596646  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:21:29.684124  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:21:29.684214  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0701 12:21:29.707185  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:21:29.707254  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 12:21:29.729605  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:21:29.729687  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:21:29.751505  652196 provision.go:87] duration metric: took 332.894756ms to configureAuth
	I0701 12:21:29.751536  652196 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:21:29.751802  652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:21:29.751834  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:29.752179  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.754903  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.755331  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.755367  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.755494  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.755709  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.755868  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.756016  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.756168  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:29.756341  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:29.756351  652196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:21:29.867557  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:21:29.867582  652196 buildroot.go:70] root file system type: tmpfs
	I0701 12:21:29.867738  652196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:21:29.867768  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.870702  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.871111  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.871152  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.871294  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.871532  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.871806  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.871989  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.872177  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:29.872347  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:29.872410  652196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:21:29.995623  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:21:29.995671  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.998574  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.998969  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.999001  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.999184  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.999403  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.999598  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.999772  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.999916  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:30.000093  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:30.000109  652196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:21:31.849411  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
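
The install step above is a compare-and-swap: the freshly rendered unit is only moved into place, and docker only restarted, when `diff` exits non-zero, i.e. when the unit changed or (as the "can't stat" output shows here) did not exist yet. The pattern in isolation, assuming a rendered unit at docker.service.new:

    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
        sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
        sudo systemctl daemon-reload
        sudo systemctl enable docker && sudo systemctl restart docker
    }
    # an unchanged unit short-circuits the block, so warm restarts skip the docker restart
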
	I0701 12:21:31.849452  652196 machine.go:97] duration metric: took 2.802248138s to provisionDockerMachine
	I0701 12:21:31.849473  652196 start.go:293] postStartSetup for "ha-735960" (driver="kvm2")
	I0701 12:21:31.849487  652196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:21:31.849508  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:31.849934  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:21:31.849982  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:31.853029  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:31.853464  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:31.853494  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:31.853656  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:31.853877  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:31.854065  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:31.854242  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:21:31.948096  652196 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:21:31.952493  652196 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:21:31.952522  652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:21:31.952580  652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:21:31.952654  652196 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:21:31.952664  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:21:31.952750  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:21:31.962034  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:21:31.985898  652196 start.go:296] duration metric: took 136.407484ms for postStartSetup
	I0701 12:21:31.985953  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:31.986287  652196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:21:31.986316  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:31.988934  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:31.989328  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:31.989359  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:31.989497  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:31.989724  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:31.989863  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:31.990038  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:21:32.076710  652196 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:21:32.076807  652196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:21:32.133792  652196 fix.go:56] duration metric: took 18.045488816s for fixHost
	I0701 12:21:32.133863  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:32.136703  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.137078  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:32.137110  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.137321  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:32.137591  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:32.137793  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:32.137963  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:32.138201  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:32.138518  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:32.138541  652196 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0701 12:21:32.254973  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836492.215186729
	
	I0701 12:21:32.255001  652196 fix.go:216] guest clock: 1719836492.215186729
	I0701 12:21:32.255007  652196 fix.go:229] Guest: 2024-07-01 12:21:32.215186729 +0000 UTC Remote: 2024-07-01 12:21:32.133836118 +0000 UTC m=+18.172225533 (delta=81.350611ms)
	I0701 12:21:32.255027  652196 fix.go:200] guest clock delta is within tolerance: 81.350611ms
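
The `%!s(MISSING)` markers above are a logging artifact (the format string is passed through Printf without arguments); the command actually executed is `date +%s.%N`, as the fractional-seconds output confirms. The fix.go check compares that guest timestamp against the host clock and only resyncs when the absolute delta exceeds a tolerance. A sketch of the same comparison, reusing the SSH invocation from earlier; the 5-second threshold is an illustrative value, not minikube's exact tolerance:

    KEY=/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa
    guest=$(ssh -i "$KEY" docker@192.168.39.16 'date +%s.%N')
    host=$(date +%s.%N)
    drift=$(echo "$host - $guest" | bc | tr -d '-')   # absolute delta in seconds
    awk -v d="$drift" 'BEGIN { exit !(d > 5) }' && echo "would resync guest clock"
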
	I0701 12:21:32.255032  652196 start.go:83] releasing machines lock for "ha-735960", held for 18.166751927s
	I0701 12:21:32.255050  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:32.255338  652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:21:32.258091  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.258459  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:32.258481  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.258679  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:32.259224  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:32.259383  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:32.259520  652196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:21:32.259564  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:32.259693  652196 ssh_runner.go:195] Run: cat /version.json
	I0701 12:21:32.259718  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:32.262127  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.262481  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:32.262518  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.262538  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.262653  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:32.262845  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:32.263031  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:32.263054  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:32.263074  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.263215  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:21:32.263229  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:32.263398  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:32.263547  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:32.263699  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:21:32.343012  652196 ssh_runner.go:195] Run: systemctl --version
	I0701 12:21:32.428409  652196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0701 12:21:32.433742  652196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:21:32.433815  652196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:21:32.449052  652196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
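
Stray bridge/podman CNI configs are renamed out of the way because the cluster will install kindnet, and CRI runtimes typically load the lexically first config file in /etc/cni/net.d. The find invocation above, written out without the log's quoting noise:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
        -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
    # 87-podman-bridge.conflist -> 87-podman-bridge.conflist.mk_disabled, as logged above
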
	I0701 12:21:32.449087  652196 start.go:494] detecting cgroup driver to use...
	I0701 12:21:32.449338  652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:21:32.471651  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:21:32.481832  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:21:32.491470  652196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:21:32.491548  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:21:32.501229  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:21:32.511119  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:21:32.520826  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:21:32.530559  652196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:21:32.542109  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:21:32.551821  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:21:32.561403  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:21:32.571068  652196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:21:32.579813  652196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:21:32.588595  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:32.705377  652196 ssh_runner.go:195] Run: sudo systemctl restart containerd
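
The sed pipeline above standardizes containerd on the cgroupfs driver (containerd is configured here even though it is stopped a few lines later, so every runtime on the box agrees with the kubelet's `cgroupDriver: cgroupfs`). The one edit that decides the driver:

    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    # the flag lives under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options];
    # false selects cgroupfs, true would select the systemd driver
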
	I0701 12:21:32.724169  652196 start.go:494] detecting cgroup driver to use...
	I0701 12:21:32.724285  652196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:21:32.739050  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:21:32.753169  652196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:21:32.769805  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:21:32.783750  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:21:32.797509  652196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:21:32.821510  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:21:32.835901  652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:21:32.854192  652196 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:21:32.858039  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:21:32.867652  652196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:21:32.884216  652196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:21:33.001636  652196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:21:33.121229  652196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:21:33.121419  652196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 12:21:33.138482  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:33.262395  652196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:21:35.714549  652196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.452099351s)
	I0701 12:21:35.714642  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 12:21:35.727946  652196 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0701 12:21:35.744089  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:21:35.757426  652196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:21:35.868089  652196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:21:35.989857  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:36.121343  652196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:21:36.138520  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:21:36.152026  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:36.271312  652196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0701 12:21:36.351567  652196 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:21:36.351668  652196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0701 12:21:36.357143  652196 start.go:562] Will wait 60s for crictl version
	I0701 12:21:36.357212  652196 ssh_runner.go:195] Run: which crictl
	I0701 12:21:36.361384  652196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:21:36.400372  652196 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.1
	RuntimeApiVersion:  v1
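
`crictl version` works without flags here because /etc/crictl.yaml (written above with runtime-endpoint: unix:///var/run/cri-dockerd.sock) already points it at cri-dockerd. The explicit form is handy when debugging against a different socket:

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a
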
	I0701 12:21:36.400446  652196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:21:36.427941  652196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:21:36.456620  652196 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
	I0701 12:21:36.456687  652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:21:36.459384  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:36.459752  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:36.459781  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:36.459970  652196 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0701 12:21:36.463956  652196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
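
The /etc/hosts rewrite above is deliberately idempotent: strip any existing host.minikube.internal entry, append the current mapping, and copy the whole file back, so repeated restarts never accumulate duplicate lines (a plain >> append would). Expanded:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.39.1\thost.minikube.internal\n'
    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
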
	I0701 12:21:36.476676  652196 kubeadm.go:877] updating cluster {Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:fa
lse freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0701 12:21:36.476851  652196 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:21:36.476914  652196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:21:36.493466  652196 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0701 12:21:36.493530  652196 docker.go:615] Images already preloaded, skipping extraction
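
The preload check is a set comparison: list what the runtime already holds and skip extracting the preload tarball when the expected images are present. The probe is the same command the log shows:

    docker images --format '{{.Repository}}:{{.Tag}}'
    # all kube-* images at v1.30.2 present -> "Images already preloaded, skipping extraction"
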
	I0701 12:21:36.493620  652196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:21:36.510908  652196 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0701 12:21:36.510939  652196 cache_images.go:84] Images are preloaded, skipping loading
	I0701 12:21:36.510952  652196 kubeadm.go:928] updating node { 192.168.39.16 8443 v1.30.2 docker true true} ...
	I0701 12:21:36.511079  652196 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
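
The kubelet flags above land in a systemd drop-in (10-kubeadm.conf, copied below) that uses the same empty-ExecStart reset trick documented in the docker unit earlier: the bare `ExecStart=` clears the inherited command so the full one can be set. To inspect the merged unit systemd will actually run:

    systemctl cat kubelet
    # prints the base unit followed by each drop-in; the last ExecStart wins
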
	I0701 12:21:36.511139  652196 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0701 12:21:36.536408  652196 cni.go:84] Creating CNI manager for ""
	I0701 12:21:36.536430  652196 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0701 12:21:36.536441  652196 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0701 12:21:36.536470  652196 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.16 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-735960 NodeName:ha-735960 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0701 12:21:36.536633  652196 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-735960"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0701 12:21:36.536656  652196 kube-vip.go:115] generating kube-vip config ...
	I0701 12:21:36.536698  652196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0701 12:21:36.551906  652196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
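
Load-balancing is only auto-enabled because the modprobe above succeeded: kube-vip's control-plane load balancer rides on IPVS, so the ip_vs modules must load. To verify on the guest (ipvsadm availability in this image is an assumption):

    lsmod | grep '^ip_vs'
    sudo ipvsadm -Ln   # if installed: shows the virtual service on 8443 once kube-vip is up
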
	I0701 12:21:36.552024  652196 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
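
With cp_enable and vip_leaderelection set, exactly one control-plane node holds the lease named plndr-cp-lock and answers on the VIP 192.168.39.254. Two quick ways to find the current holder, assuming the eth0 interface named in the manifest:

    ip -4 addr show eth0 | grep 192.168.39.254    # run on each control-plane node
    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'
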
	I0701 12:21:36.552078  652196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 12:21:36.561989  652196 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:21:36.562059  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0701 12:21:36.571281  652196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0701 12:21:36.587480  652196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:21:36.603596  652196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0701 12:21:36.621063  652196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0701 12:21:36.637192  652196 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0701 12:21:36.640909  652196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:21:36.652690  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:36.768142  652196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:21:36.786625  652196 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960 for IP: 192.168.39.16
	I0701 12:21:36.786655  652196 certs.go:194] generating shared ca certs ...
	I0701 12:21:36.786676  652196 certs.go:226] acquiring lock for ca certs: {Name:mk34e166bfd069e523b2325e14d1812c523bff53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:36.786854  652196 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key
	I0701 12:21:36.786904  652196 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key
	I0701 12:21:36.786915  652196 certs.go:256] generating profile certs ...
	I0701 12:21:36.787017  652196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key
	I0701 12:21:36.787046  652196 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af
	I0701 12:21:36.787059  652196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.16 192.168.39.86 192.168.39.97 192.168.39.254]
	I0701 12:21:37.059263  652196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af ...
	I0701 12:21:37.059305  652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af: {Name:mk1be9dc4667506ac6fdcfb1e313edd1292fe7f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:37.059483  652196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af ...
	I0701 12:21:37.059496  652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af: {Name:mkf9220e489bd04f035dab270c790bb3448ca6be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:37.059596  652196 certs.go:381] copying /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af -> /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt
	I0701 12:21:37.059809  652196 certs.go:385] copying /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af -> /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key
	I0701 12:21:37.059969  652196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key
	I0701 12:21:37.059987  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 12:21:37.060000  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 12:21:37.060014  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 12:21:37.060026  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 12:21:37.060038  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0701 12:21:37.060054  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0701 12:21:37.060066  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0701 12:21:37.060077  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0701 12:21:37.060165  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem (1338 bytes)
	W0701 12:21:37.060197  652196 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854_empty.pem, impossibly tiny 0 bytes
	I0701 12:21:37.060207  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 12:21:37.060228  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:21:37.060248  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:21:37.060270  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem (1675 bytes)
	I0701 12:21:37.060305  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:21:37.060331  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem -> /usr/share/ca-certificates/637854.pem
	I0701 12:21:37.060347  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /usr/share/ca-certificates/6378542.pem
	I0701 12:21:37.060359  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:21:37.061045  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:21:37.111708  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:21:37.168649  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:21:37.204675  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:21:37.241167  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0701 12:21:37.265225  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 12:21:37.288613  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 12:21:37.312645  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 12:21:37.337494  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem --> /usr/share/ca-certificates/637854.pem (1338 bytes)
	I0701 12:21:37.361044  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /usr/share/ca-certificates/6378542.pem (1708 bytes)
	I0701 12:21:37.385424  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:21:37.409054  652196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 12:21:37.426602  652196 ssh_runner.go:195] Run: openssl version
	I0701 12:21:37.432129  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:21:37.442695  652196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:21:37.447331  652196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:21:37.447415  652196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:21:37.453215  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 12:21:37.464086  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/637854.pem && ln -fs /usr/share/ca-certificates/637854.pem /etc/ssl/certs/637854.pem"
	I0701 12:21:37.474527  652196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/637854.pem
	I0701 12:21:37.479057  652196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 12:11 /usr/share/ca-certificates/637854.pem
	I0701 12:21:37.479123  652196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/637854.pem
	I0701 12:21:37.484641  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/637854.pem /etc/ssl/certs/51391683.0"
	I0701 12:21:37.495175  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6378542.pem && ln -fs /usr/share/ca-certificates/6378542.pem /etc/ssl/certs/6378542.pem"
	I0701 12:21:37.505961  652196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6378542.pem
	I0701 12:21:37.510286  652196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 12:11 /usr/share/ca-certificates/6378542.pem
	I0701 12:21:37.510365  652196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem
	I0701 12:21:37.516124  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6378542.pem /etc/ssl/certs/3ec20f2e.0"
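
The `openssl x509 -hash` runs feed the symlinks that follow: OpenSSL looks up trust anchors in /etc/ssl/certs by subject-hash filenames of the form <hash>.0, so each CA needs a matching link. The idiom in isolation:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem)
    sudo ln -fs /etc/ssl/certs/6378542.pem "/etc/ssl/certs/${hash}.0"
    # ${hash} is 3ec20f2e in this run, matching the test -L guard above
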
	I0701 12:21:37.527154  652196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 12:21:37.532024  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 12:21:37.538145  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 12:21:37.544280  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 12:21:37.550448  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 12:21:37.556356  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 12:21:37.562174  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
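
The six `-checkend 86400` probes are cheap expiry pre-flights: openssl exits 0 when the certificate is still valid 86400 seconds (24 hours) from now and non-zero otherwise, letting the restart path decide whether certs need regenerating before kubeadm runs. For example:

    if ! sudo openssl x509 -noout -checkend 86400 \
            -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
        echo "certificate expires within 24h; regenerate before restarting the control plane"
    fi
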
	I0701 12:21:37.568144  652196 kubeadm.go:391] StartCluster: {Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false
freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:21:37.568362  652196 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0701 12:21:37.586457  652196 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0701 12:21:37.596129  652196 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0701 12:21:37.596158  652196 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0701 12:21:37.596164  652196 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0701 12:21:37.596237  652196 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 12:21:37.605715  652196 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 12:21:37.606193  652196 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-735960" does not appear in /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:21:37.606354  652196 kubeconfig.go:62] /home/jenkins/minikube-integration/19166-630650/kubeconfig needs updating (will repair): [kubeconfig missing "ha-735960" cluster setting kubeconfig missing "ha-735960" context setting]
	I0701 12:21:37.606708  652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/kubeconfig: {Name:mke3ef9d019eff4edd273b00c416fd77ed009242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:37.607135  652196 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:21:37.607365  652196 kapi.go:59] client config for ha-735960: &rest.Config{Host:"https://192.168.39.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key", CAFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(ni
l)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfbb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0701 12:21:37.607752  652196 cert_rotation.go:137] Starting client certificate rotation controller
	I0701 12:21:37.608047  652196 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 12:21:37.617685  652196 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.16
	I0701 12:21:37.617715  652196 kubeadm.go:591] duration metric: took 21.544408ms to restartPrimaryControlPlane
	I0701 12:21:37.617725  652196 kubeadm.go:393] duration metric: took 49.593354ms to StartCluster
	I0701 12:21:37.617748  652196 settings.go:142] acquiring lock: {Name:mk6f7c85ea77a73ff0ac851454721f2e6e309153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:37.617834  652196 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:21:37.618535  652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/kubeconfig: {Name:mke3ef9d019eff4edd273b00c416fd77ed009242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:37.618754  652196 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:21:37.618777  652196 start.go:240] waiting for startup goroutines ...
	I0701 12:21:37.618792  652196 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0701 12:21:37.619028  652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:21:37.621683  652196 out.go:177] * Enabled addons: 
	I0701 12:21:37.622979  652196 addons.go:510] duration metric: took 4.192015ms for enable addons: enabled=[]
	I0701 12:21:37.623011  652196 start.go:245] waiting for cluster config update ...
	I0701 12:21:37.623019  652196 start.go:254] writing updated cluster config ...
	I0701 12:21:37.624600  652196 out.go:177] 
	I0701 12:21:37.626023  652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:21:37.626124  652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:21:37.627745  652196 out.go:177] * Starting "ha-735960-m02" control-plane node in "ha-735960" cluster
	I0701 12:21:37.628946  652196 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:21:37.628969  652196 cache.go:56] Caching tarball of preloaded images
	I0701 12:21:37.629060  652196 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:21:37.629072  652196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:21:37.629161  652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:21:37.629353  652196 start.go:360] acquireMachinesLock for ha-735960-m02: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:21:37.629411  652196 start.go:364] duration metric: took 31.79µs to acquireMachinesLock for "ha-735960-m02"
	I0701 12:21:37.629427  652196 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:21:37.629440  652196 fix.go:54] fixHost starting: m02
	I0701 12:21:37.629698  652196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:21:37.629747  652196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:21:37.644981  652196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I0701 12:21:37.645473  652196 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:21:37.645947  652196 main.go:141] libmachine: Using API Version  1
	I0701 12:21:37.645969  652196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:21:37.646284  652196 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:21:37.646523  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:37.646646  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetState
	I0701 12:21:37.648195  652196 fix.go:112] recreateIfNeeded on ha-735960-m02: state=Stopped err=<nil>
	I0701 12:21:37.648228  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	W0701 12:21:37.648406  652196 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:21:37.650489  652196 out.go:177] * Restarting existing kvm2 VM for "ha-735960-m02" ...
	I0701 12:21:37.651975  652196 main.go:141] libmachine: (ha-735960-m02) Calling .Start
	I0701 12:21:37.652186  652196 main.go:141] libmachine: (ha-735960-m02) Ensuring networks are active...
	I0701 12:21:37.652916  652196 main.go:141] libmachine: (ha-735960-m02) Ensuring network default is active
	I0701 12:21:37.653282  652196 main.go:141] libmachine: (ha-735960-m02) Ensuring network mk-ha-735960 is active
	I0701 12:21:37.653613  652196 main.go:141] libmachine: (ha-735960-m02) Getting domain xml...
	I0701 12:21:37.654254  652196 main.go:141] libmachine: (ha-735960-m02) Creating domain...
	I0701 12:21:38.852369  652196 main.go:141] libmachine: (ha-735960-m02) Waiting to get IP...
	I0701 12:21:38.853358  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:38.853762  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:38.853832  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:38.853747  652384 retry.go:31] will retry after 295.798088ms: waiting for machine to come up
	I0701 12:21:39.151332  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:39.151886  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:39.151912  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:39.151845  652384 retry.go:31] will retry after 255.18729ms: waiting for machine to come up
	I0701 12:21:39.408310  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:39.408739  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:39.408792  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:39.408689  652384 retry.go:31] will retry after 457.740061ms: waiting for machine to come up
	I0701 12:21:39.868295  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:39.868702  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:39.868736  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:39.868629  652384 retry.go:31] will retry after 548.674851ms: waiting for machine to come up
	I0701 12:21:40.419597  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:40.420069  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:40.420100  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:40.420009  652384 retry.go:31] will retry after 755.113146ms: waiting for machine to come up
	I0701 12:21:41.176960  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:41.177380  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:41.177429  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:41.177309  652384 retry.go:31] will retry after 739.288718ms: waiting for machine to come up
	I0701 12:21:41.918305  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:41.918853  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:41.918884  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:41.918789  652384 retry.go:31] will retry after 722.041404ms: waiting for machine to come up
	I0701 12:21:42.642704  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:42.643188  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:42.643219  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:42.643113  652384 retry.go:31] will retry after 1.139279839s: waiting for machine to come up
	I0701 12:21:43.784719  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:43.785159  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:43.785193  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:43.785114  652384 retry.go:31] will retry after 1.276779849s: waiting for machine to come up
	I0701 12:21:45.063522  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:45.064026  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:45.064058  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:45.063969  652384 retry.go:31] will retry after 2.284492799s: waiting for machine to come up
	I0701 12:21:47.351530  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:47.352076  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:47.352113  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:47.351988  652384 retry.go:31] will retry after 2.171521184s: waiting for machine to come up
	I0701 12:21:49.526162  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:49.526566  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:49.526590  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:49.526523  652384 retry.go:31] will retry after 3.533181759s: waiting for machine to come up
	I0701 12:21:53.061482  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.062025  652196 main.go:141] libmachine: (ha-735960-m02) Found IP for machine: 192.168.39.86
	I0701 12:21:53.062048  652196 main.go:141] libmachine: (ha-735960-m02) Reserving static IP address...
	I0701 12:21:53.062060  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has current primary IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.062473  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "ha-735960-m02", mac: "52:54:00:0b:2f:ce", ip: "192.168.39.86"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.062504  652196 main.go:141] libmachine: (ha-735960-m02) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960-m02", mac: "52:54:00:0b:2f:ce", ip: "192.168.39.86"}
	I0701 12:21:53.062534  652196 main.go:141] libmachine: (ha-735960-m02) Reserved static IP address: 192.168.39.86
	I0701 12:21:53.062554  652196 main.go:141] libmachine: (ha-735960-m02) Waiting for SSH to be available...
	I0701 12:21:53.062566  652196 main.go:141] libmachine: (ha-735960-m02) DBG | Getting to WaitForSSH function...
	I0701 12:21:53.064461  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.064796  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.064828  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.064893  652196 main.go:141] libmachine: (ha-735960-m02) DBG | Using SSH client type: external
	I0701 12:21:53.064938  652196 main.go:141] libmachine: (ha-735960-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa (-rw-------)
	I0701 12:21:53.064965  652196 main.go:141] libmachine: (ha-735960-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:21:53.064981  652196 main.go:141] libmachine: (ha-735960-m02) DBG | About to run SSH command:
	I0701 12:21:53.065000  652196 main.go:141] libmachine: (ha-735960-m02) DBG | exit 0
	I0701 12:21:53.190266  652196 main.go:141] libmachine: (ha-735960-m02) DBG | SSH cmd err, output: <nil>: 
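
The run of "will retry after …" lines above is minikube's retry helper polling libvirt for the domain's DHCP lease until an address appears, with a growing, jittered delay between attempts. A minimal sketch of that wait-with-backoff pattern, assuming nothing about minikube's real internals (every name below is illustrative, not minikube's API):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it yields an address, sleeping a jittered,
// roughly doubling delay between attempts, like the "will retry after ..."
// lines above. lookup stands in for the libvirt DHCP-lease query.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	base := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// Jitter keeps concurrent waiters from polling in lockstep.
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if base < 4*time.Second {
			base *= 2 // cap the base so the worst-case gap stays bounded
		}
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 { // simulate the lease showing up on the 4th poll
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.86", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}
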
	I0701 12:21:53.190636  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetConfigRaw
	I0701 12:21:53.191272  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:21:53.193658  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.193994  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.194027  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.194274  652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:21:53.194544  652196 machine.go:94] provisionDockerMachine start ...
	I0701 12:21:53.194562  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:53.194814  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.196894  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.197262  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.197291  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.197414  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:53.197654  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.197829  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.198021  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:53.198185  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:53.198432  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:53.198448  652196 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:21:53.306480  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:21:53.306526  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:21:53.306839  652196 buildroot.go:166] provisioning hostname "ha-735960-m02"
	I0701 12:21:53.306870  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:21:53.307063  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.309645  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.310086  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.310116  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.310307  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:53.310514  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.310689  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.310820  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:53.310997  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:53.311210  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:53.311225  652196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960-m02 && echo "ha-735960-m02" | sudo tee /etc/hostname
	I0701 12:21:53.434956  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960-m02
	
	I0701 12:21:53.434992  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.437612  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.438016  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.438040  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.438190  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:53.438418  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.438601  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.438768  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:53.438926  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:53.439106  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:53.439128  652196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:21:53.559115  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
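
Both the hostname command and the /etc/hosts patch above run through the external ssh client whose full argv appears earlier in the log (host-key checks disabled, key-only auth, user docker). A rough Go equivalent of that remote-command step, with hypothetical names and a deliberately shortened key path:

package main

import (
	"fmt"
	"os/exec"
)

// runRemote is an illustrative stand-in for the provisioner's SSH step: it
// shells out to the system ssh client with the same kinds of options the log
// shows and runs a single command on the guest.
func runRemote(ip, keyPath, cmd string) (string, error) {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-i", keyPath,
		"docker@" + ip,
		cmd,
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.39.86",
		"/path/to/id_rsa", // placeholder; the log shows the real key path
		`sudo hostname ha-735960-m02 && echo "ha-735960-m02" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}
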
	I0701 12:21:53.559146  652196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:21:53.559163  652196 buildroot.go:174] setting up certificates
	I0701 12:21:53.559174  652196 provision.go:84] configureAuth start
	I0701 12:21:53.559186  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:21:53.559514  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:21:53.562119  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.562516  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.562550  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.562753  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.564741  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.565063  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.565082  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.565233  652196 provision.go:143] copyHostCerts
	I0701 12:21:53.565266  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:21:53.565309  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:21:53.565318  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:21:53.565379  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:21:53.565450  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:21:53.565468  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:21:53.565474  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:21:53.565492  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:21:53.565533  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:21:53.565549  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:21:53.565555  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:21:53.565570  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:21:53.565618  652196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960-m02 san=[127.0.0.1 192.168.39.86 ha-735960-m02 localhost minikube]
	I0701 12:21:53.749696  652196 provision.go:177] copyRemoteCerts
	I0701 12:21:53.749755  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:21:53.749780  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.752460  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.752780  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.752813  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.752952  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:53.753159  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.753385  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:53.753547  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:21:53.835990  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:21:53.836060  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:21:53.858665  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:21:53.858753  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0701 12:21:53.880281  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:21:53.880367  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 12:21:53.902677  652196 provision.go:87] duration metric: took 343.48703ms to configureAuth
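
configureAuth regenerates the Docker server certificate so its SANs cover every name and address a client might dial: 127.0.0.1, the machine IP 192.168.39.86, the hostname, localhost, and minikube, per the "generating server cert" line above. A compact sketch of producing such a CA-signed certificate with Go's crypto/x509 (error handling elided for brevity; this is not minikube's code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA, standing in for the ca.pem/ca-key.pem pair in the log.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert whose SANs match the san=[...] list in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-735960-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.86")},
		DNSNames:     []string{"ha-735960-m02", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
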
	I0701 12:21:53.902709  652196 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:21:53.903020  652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:21:53.903053  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:53.903351  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.905929  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.906189  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.906216  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.906438  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:53.906667  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.906826  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.906966  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:53.907119  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:53.907282  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:53.907294  652196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:21:54.019474  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:21:54.019501  652196 buildroot.go:70] root file system type: tmpfs
	I0701 12:21:54.019656  652196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:21:54.019681  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:54.022816  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:54.023184  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:54.023208  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:54.023371  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:54.023579  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:54.023787  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:54.023946  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:54.024146  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:54.024319  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:54.024384  652196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.16"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:21:54.147740  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.16
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:21:54.147778  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:54.150547  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:54.151173  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:54.151208  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:54.151345  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:54.151561  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:54.151771  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:54.151918  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:54.152095  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:54.152266  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:54.152281  652196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:21:56.028628  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 12:21:56.028682  652196 machine.go:97] duration metric: took 2.834118436s to provisionDockerMachine
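
The `diff -u … || { mv …; systemctl … }` command above is an idempotent install: the desired unit is written to docker.service.new, compared with what is on disk, and only swapped in (followed by daemon-reload, enable, and restart) when it differs. The "can't stat" diff output simply means no docker.service existed yet on this freshly booted VM, so the file was installed and the symlink created. The same write-compare-rename idea in a few lines of Go (a sketch, not minikube's implementation):

package main

import (
	"bytes"
	"os"
)

// installIfChanged mirrors the log's "write .new, diff, mv" sequence: compare
// the desired contents against the target and rename the new file over it only
// when they differ. Returns true when the target was replaced, i.e. when the
// caller should reload and restart the service.
func installIfChanged(path string, want []byte) (bool, error) {
	have, err := os.ReadFile(path)
	if err == nil && bytes.Equal(have, want) {
		return false, nil // already up to date; no restart needed
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, want, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path)
}

Renaming within one filesystem is atomic, so a reader of the unit file never observes a partially written version.
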
	I0701 12:21:56.028701  652196 start.go:293] postStartSetup for "ha-735960-m02" (driver="kvm2")
	I0701 12:21:56.028716  652196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:21:56.028738  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.029099  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:21:56.029132  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:56.031882  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.032264  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.032289  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.032433  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:56.032608  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.032817  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:56.032971  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:21:56.117309  652196 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:21:56.121231  652196 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:21:56.121263  652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:21:56.121324  652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:21:56.121391  652196 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:21:56.121402  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:21:56.121478  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:21:56.130302  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:21:56.152776  652196 start.go:296] duration metric: took 124.058691ms for postStartSetup
	I0701 12:21:56.152821  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.153142  652196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:21:56.153170  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:56.155689  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.156094  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.156120  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.156332  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:56.156555  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.156727  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:56.156917  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:21:56.240391  652196 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:21:56.240454  652196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:21:56.280843  652196 fix.go:56] duration metric: took 18.651393475s for fixHost
	I0701 12:21:56.280895  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:56.283268  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.283590  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.283617  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.283860  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:56.284107  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.284307  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.284501  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:56.284686  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:56.284888  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:56.284903  652196 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0701 12:21:56.398873  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836516.359963406
	
	I0701 12:21:56.398893  652196 fix.go:216] guest clock: 1719836516.359963406
	I0701 12:21:56.398901  652196 fix.go:229] Guest: 2024-07-01 12:21:56.359963406 +0000 UTC Remote: 2024-07-01 12:21:56.280872467 +0000 UTC m=+42.319261894 (delta=79.090939ms)
	I0701 12:21:56.398919  652196 fix.go:200] guest clock delta is within tolerance: 79.090939ms
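
The guest clock check above runs `date +%s.%N` inside the VM (the `%!s(MISSING)` noise is just the logger mis-expanding the command's percent signs) and compares the result with the host clock; the 79ms delta is inside tolerance, so no time resync is needed. A toy version of that comparison, with the 2s tolerance being an assumption for illustration rather than minikube's actual constant:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest/host clock skew is small enough
// to skip a resync.
func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(79 * time.Millisecond) // the delta reported in the log
	fmt.Println(withinTolerance(guest, host, 2*time.Second))
}
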
	I0701 12:21:56.398924  652196 start.go:83] releasing machines lock for "ha-735960-m02", held for 18.769503298s
	I0701 12:21:56.398940  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.399198  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:21:56.401982  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.402404  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.402436  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.404680  652196 out.go:177] * Found network options:
	I0701 12:21:56.406167  652196 out.go:177]   - NO_PROXY=192.168.39.16
	W0701 12:21:56.407620  652196 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:21:56.407664  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.408285  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.408498  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.408606  652196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:21:56.408647  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	W0701 12:21:56.408741  652196 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:21:56.408826  652196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0701 12:21:56.408849  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:56.411170  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.411559  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.411598  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.411651  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.411933  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:56.412130  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.412221  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.412247  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.412295  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:56.412519  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:56.412508  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:21:56.412720  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.412871  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:56.412987  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	W0701 12:21:56.492511  652196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:21:56.492595  652196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:21:56.515270  652196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:21:56.515305  652196 start.go:494] detecting cgroup driver to use...
	I0701 12:21:56.515419  652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:21:56.549004  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:21:56.560711  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:21:56.578763  652196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:21:56.578832  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:21:56.589742  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:21:56.606645  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:21:56.620036  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:21:56.632033  652196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:21:56.642458  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:21:56.653078  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:21:56.663035  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:21:56.673203  652196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:21:56.682348  652196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:21:56.691388  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:56.798709  652196 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 12:21:56.821386  652196 start.go:494] detecting cgroup driver to use...
	I0701 12:21:56.821493  652196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:21:56.841303  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:21:56.857934  652196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:21:56.877318  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:21:56.889777  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:21:56.901844  652196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:21:56.927595  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:21:56.940849  652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:21:56.958116  652196 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:21:56.961664  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:21:56.969985  652196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:21:56.985048  652196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:21:57.096072  652196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:21:57.211289  652196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:21:57.211354  652196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 12:21:57.227069  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:57.341292  652196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:22:58.423195  652196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.08185799s)
	I0701 12:22:58.423268  652196 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0701 12:22:58.444321  652196 out.go:177] 
	W0701 12:22:58.445678  652196 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 01 12:21:54 ha-735960-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 01 12:21:54 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:54.524329635Z" level=info msg="Starting up"
	Jul 01 12:21:54 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:54.525054987Z" level=info msg="containerd not running, starting managed containerd"
	Jul 01 12:21:54 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:54.525787354Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=513
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.553695593Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572290393Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572432449Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572518940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572558429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572981597Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573093539Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573355911Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573425452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573469593Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573505057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573782642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.574848351Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.576951334Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577031827Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577253828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577304329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577551634Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577624370Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577665230Z" level=info msg="metadata content store policy set" policy=shared
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.580979416Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581128476Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581284824Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581371031Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581432559Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581524784Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581996275Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582118070Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582162131Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582245548Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582319648Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582368655Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582407448Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582445279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582484550Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582521928Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582558472Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582601035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582656126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582693985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582741537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582779033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582815513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582853076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582892671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582938669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582980248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583032987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583083364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583122445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583161506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583262727Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583333396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583373579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583414811Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583520612Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583751718Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583800626Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583838317Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583874340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583912430Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583991424Z" level=info msg="NRI interface is disabled by configuration."
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584364167Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584467963Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584654486Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584785754Z" level=info msg="containerd successfully booted in 0.032655s"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.555699119Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.620790434Z" level=info msg="Loading containers: start."
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.813021303Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.888534738Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.940299653Z" level=info msg="Loading containers: done."
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.956534314Z" level=info msg="Docker daemon" commit=ff1e2c0 containerd-snapshotter=false storage-driver=overlay2 version=27.0.1
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.956851438Z" level=info msg="Daemon has completed initialization"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.988054435Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.988129188Z" level=info msg="API listen on [::]:2376"
	Jul 01 12:21:55 ha-735960-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.316115209Z" level=info msg="Processing signal 'terminated'"
	Jul 01 12:21:57 ha-735960-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317321834Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317386191Z" level=info msg="Daemon shutdown complete"
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317447382Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317464543Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 01 12:21:58 ha-735960-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 01 12:21:58 ha-735960-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 01 12:21:58 ha-735960-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 01 12:21:58 ha-735960-m02 dockerd[1188]: time="2024-07-01T12:21:58.364754006Z" level=info msg="Starting up"
	Jul 01 12:22:58 ha-735960-m02 dockerd[1188]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 01 12:22:58 ha-735960-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 01 12:22:58 ha-735960-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 01 12:22:58 ha-735960-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0701 12:22:58.445741  652196 out.go:239] * 
	W0701 12:22:58.447325  652196 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:22:58.449434  652196 out.go:177] 
	
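Reading the failure above: the first dockerd on ha-735960-m02 (pid 506) came up cleanly at 12:21:55, minikube then wrote /etc/docker/daemon.json and restarted the service, and the second dockerd (pid 1188) died 60 seconds later because it could not dial /run/containerd/containerd.sock. The fault is therefore in containerd (or its socket) after the restart, not in the daemon.json contents. A minimal diagnostic sketch, assuming SSH access to the node through the minikube CLI and minikube's default file layout:

    # inspect both runtime units on the affected node
    minikube -p ha-735960 ssh -n ha-735960-m02 -- sudo systemctl status docker containerd
    minikube -p ha-735960 ssh -n ha-735960-m02 -- sudo journalctl -u containerd --no-pager
    # the 130-byte daemon.json written above sets the cgroup driver; its shape is assumed
    # (not captured in this report) to be {"exec-opts": ["native.cgroupdriver=cgroupfs"], ...}
    minikube -p ha-735960 ssh -n ha-735960-m02 -- sudo cat /etc/docker/daemon.json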
	
	==> Docker <==
	Jul 01 12:21:44 ha-735960 dockerd[1190]: time="2024-07-01T12:21:44.208507474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:22:05 ha-735960 dockerd[1184]: time="2024-07-01T12:22:05.425890009Z" level=info msg="ignoring event" container=d97b6df80577316a9cf70b2af0f8d52bb2bd7071ff932a8f1f03df9497724786 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 01 12:22:05 ha-735960 dockerd[1190]: time="2024-07-01T12:22:05.426406022Z" level=info msg="shim disconnected" id=d97b6df80577316a9cf70b2af0f8d52bb2bd7071ff932a8f1f03df9497724786 namespace=moby
	Jul 01 12:22:05 ha-735960 dockerd[1190]: time="2024-07-01T12:22:05.427162251Z" level=warning msg="cleaning up after shim disconnected" id=d97b6df80577316a9cf70b2af0f8d52bb2bd7071ff932a8f1f03df9497724786 namespace=moby
	Jul 01 12:22:05 ha-735960 dockerd[1190]: time="2024-07-01T12:22:05.427275716Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 01 12:22:06 ha-735960 dockerd[1190]: time="2024-07-01T12:22:06.439101176Z" level=info msg="shim disconnected" id=ad4259a9c8ee03ff4c6910c68c5c866481fede150d57267cdc957e46aca4ef80 namespace=moby
	Jul 01 12:22:06 ha-735960 dockerd[1184]: time="2024-07-01T12:22:06.441768147Z" level=info msg="ignoring event" container=ad4259a9c8ee03ff4c6910c68c5c866481fede150d57267cdc957e46aca4ef80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 01 12:22:06 ha-735960 dockerd[1190]: time="2024-07-01T12:22:06.442054407Z" level=warning msg="cleaning up after shim disconnected" id=ad4259a9c8ee03ff4c6910c68c5c866481fede150d57267cdc957e46aca4ef80 namespace=moby
	Jul 01 12:22:06 ha-735960 dockerd[1190]: time="2024-07-01T12:22:06.442214156Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 01 12:22:24 ha-735960 dockerd[1190]: time="2024-07-01T12:22:24.071877635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:22:24 ha-735960 dockerd[1190]: time="2024-07-01T12:22:24.072398316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:22:24 ha-735960 dockerd[1190]: time="2024-07-01T12:22:24.072506177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:22:24 ha-735960 dockerd[1190]: time="2024-07-01T12:22:24.072761669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:22:27 ha-735960 dockerd[1190]: time="2024-07-01T12:22:27.091757274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:22:27 ha-735960 dockerd[1190]: time="2024-07-01T12:22:27.091819785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:22:27 ha-735960 dockerd[1190]: time="2024-07-01T12:22:27.091834055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:22:27 ha-735960 dockerd[1190]: time="2024-07-01T12:22:27.092367194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:22:47 ha-735960 dockerd[1184]: time="2024-07-01T12:22:47.577930706Z" level=info msg="ignoring event" container=e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 01 12:22:47 ha-735960 dockerd[1190]: time="2024-07-01T12:22:47.578670317Z" level=info msg="shim disconnected" id=e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30 namespace=moby
	Jul 01 12:22:47 ha-735960 dockerd[1190]: time="2024-07-01T12:22:47.578983718Z" level=warning msg="cleaning up after shim disconnected" id=e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30 namespace=moby
	Jul 01 12:22:47 ha-735960 dockerd[1190]: time="2024-07-01T12:22:47.579585559Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 01 12:22:48 ha-735960 dockerd[1184]: time="2024-07-01T12:22:48.582829662Z" level=info msg="ignoring event" container=829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 01 12:22:48 ha-735960 dockerd[1190]: time="2024-07-01T12:22:48.583282892Z" level=info msg="shim disconnected" id=829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d namespace=moby
	Jul 01 12:22:48 ha-735960 dockerd[1190]: time="2024-07-01T12:22:48.584157023Z" level=warning msg="cleaning up after shim disconnected" id=829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d namespace=moby
	Jul 01 12:22:48 ha-735960 dockerd[1190]: time="2024-07-01T12:22:48.584285564Z" level=info msg="cleaning up dead shim" namespace=moby
	
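The Docker log above is from the primary node ha-735960, not from m02. The "ignoring event" / "shim disconnected" pairs between 12:22:05 and 12:22:48 are the runtime cleaning up after containers that exited on their own; the IDs e546c39248bc... and 829fe19c75ce... match the kube-apiserver and kube-controller-manager rows in the container table below. A sketch for mapping daemon-log IDs back to containers, assuming the docker CLI is available on the node:

    # resolve the container IDs seen in the daemon log (IDs copied from the entries above)
    docker ps -a --no-trunc --format '{{.ID}}\t{{.Names}}\t{{.Status}}' | grep -e e546c39248bc -e 829fe19c75ce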
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e546c39248bc8       56ce0fd9fb532                                                                                         34 seconds ago       Exited              kube-apiserver            2                   16dae930b4edb       kube-apiserver-ha-735960
	829fe19c75ce3       e874818b3caac                                                                                         37 seconds ago       Exited              kube-controller-manager   2                   5e2a9b91be69c       kube-controller-manager-ha-735960
	cecb3dd12e16e       38af8ddebf499                                                                                         About a minute ago   Running             kube-vip                  0                   8d1562fb4b8c3       kube-vip-ha-735960
	6a200a6b49020       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      1                   5b1097d48d724       etcd-ha-735960
	2d71437c5f06d       7820c83aa1394                                                                                         About a minute ago   Running             kube-scheduler            1                   fa7dea6a1b8bd       kube-scheduler-ha-735960
	14112a4d8f2cb       38af8ddebf499                                                                                         2 minutes ago        Exited              kube-vip                  1                   46ab74fdab7e2       kube-vip-ha-735960
	1ef6d9da6a9c5       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 minutes ago        Exited              busybox                   0                   1f5ccc7b0e655       busybox-fc5497c4f-pjfcw
	a9c30cd4b3455       cbb01a7bd410d                                                                                         6 minutes ago        Exited              coredns                   0                   7b4b4f7ec4b63       coredns-7db6d8ff4d-nk4lf
	769b0b8751350       cbb01a7bd410d                                                                                         6 minutes ago        Exited              coredns                   0                   7a349370d4f88       coredns-7db6d8ff4d-p4rtz
	97d58c94f3fdc       6e38f40d628db                                                                                         6 minutes ago        Exited              storage-provisioner       0                   9226633ad878a       storage-provisioner
	f472aef5302fd       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              6 minutes ago        Exited              kindnet-cni               0                   ab9c74a502295       kindnet-7f6hm
	6116abe6039dc       53c535741fb44                                                                                         6 minutes ago        Exited              kube-proxy                0                   da69191059798       kube-proxy-lphzn
	cb63d54411807       7820c83aa1394                                                                                         7 minutes ago        Exited              kube-scheduler            0                   19b6b0e6ed64e       kube-scheduler-ha-735960
	24c8926d2b31d       3861cfcd7c04c                                                                                         7 minutes ago        Exited              etcd                      0                   d3b914e19ca22       etcd-ha-735960
	
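The table shows the post-restart control plane on ha-735960: etcd, kube-scheduler, and kube-vip are Running, while kube-apiserver and kube-controller-manager have both Exited on their second attempt; every other row is leftover state from the run before the stop. That pattern points at an apiserver crash loop, which the etcd and kube-apiserver sections below confirm. A quick way to inspect the latest attempt, assuming docker CLI access on the node:

    # show the tail of the most recent kube-apiserver attempt (container ID from the table above)
    docker logs --tail 20 e546c39248bc8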
	
	==> coredns [769b0b875135] <==
	[INFO] 10.244.1.2:44221 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000082797s
	[INFO] 10.244.2.2:33797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157729s
	[INFO] 10.244.2.2:52590 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004055351s
	[INFO] 10.244.2.2:46983 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003253494s
	[INFO] 10.244.2.2:56187 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205215s
	[INFO] 10.244.2.2:41086 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158307s
	[INFO] 10.244.0.4:47783 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097077s
	[INFO] 10.244.0.4:50743 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001523s
	[INFO] 10.244.0.4:37141 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138763s
	[INFO] 10.244.1.2:32981 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132906s
	[INFO] 10.244.1.2:36762 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001646552s
	[INFO] 10.244.1.2:33583 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072434s
	[INFO] 10.244.2.2:37027 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156518s
	[INFO] 10.244.2.2:58435 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104504s
	[INFO] 10.244.2.2:36107 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090251s
	[INFO] 10.244.0.4:44792 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000227164s
	[INFO] 10.244.0.4:56557 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000140925s
	[INFO] 10.244.1.2:38284 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000232717s
	[INFO] 10.244.2.2:37664 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135198s
	[INFO] 10.244.2.2:60876 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00032392s
	[INFO] 10.244.1.2:37461 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133264s
	[INFO] 10.244.1.2:45182 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117372s
	[INFO] 10.244.1.2:37156 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000240093s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a9c30cd4b345] <==
	[INFO] 10.244.0.4:57095 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002251804s
	[INFO] 10.244.0.4:42381 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081215s
	[INFO] 10.244.0.4:53499 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00124929s
	[INFO] 10.244.0.4:41287 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174281s
	[INFO] 10.244.0.4:36433 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142863s
	[INFO] 10.244.1.2:47688 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130034s
	[INFO] 10.244.1.2:40562 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00183587s
	[INFO] 10.244.1.2:35137 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000771s
	[INFO] 10.244.1.2:37798 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184282s
	[INFO] 10.244.1.2:43876 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008807s
	[INFO] 10.244.2.2:35039 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119303s
	[INFO] 10.244.0.4:53229 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090292s
	[INFO] 10.244.0.4:42097 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011308s
	[INFO] 10.244.1.2:42114 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130767s
	[INFO] 10.244.1.2:56638 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110707s
	[INFO] 10.244.1.2:55805 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093484s
	[INFO] 10.244.2.2:51675 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000145117s
	[INFO] 10.244.2.2:56838 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136843s
	[INFO] 10.244.0.4:60951 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162889s
	[INFO] 10.244.0.4:34776 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112367s
	[INFO] 10.244.0.4:45397 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000073771s
	[INFO] 10.244.0.4:52372 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000058127s
	[INFO] 10.244.1.2:41033 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131962s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
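Both coredns logs end the same way: routine query traffic, then "SIGTERM: Shutting down servers then terminating" and a 5-second lameduck window. That is coredns's normal graceful shutdown (the health plugin reports unhealthy for the lameduck duration so clients drain), so these entries reflect the stop that preceded the restart rather than a fault. The lameduck value lives in the coredns ConfigMap; a sketch for confirming it once the apiserver is reachable again:

    # print the Corefile; the health plugin stanza is expected to contain "lameduck 5s"
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'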
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0701 12:23:01.267097    2767 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0701 12:23:01.267595    2767 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0701 12:23:01.269175    2767 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0701 12:23:01.269477    2767 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0701 12:23:01.270990    2767 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
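The describe-nodes step fails because kubectl is pointed at the node-local endpoint (localhost:8443) and nothing is listening there: the apiserver container has exited, as the surrounding sections show. A sketch for verifying the missing listener from the node itself:

    # confirm no process owns the apiserver port on ha-735960
    sudo ss -ltnp | grep 8443 || echo "no listener on 8443"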
	
	==> dmesg <==
	[Jul 1 12:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050877] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036108] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.421397] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.628587] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.463440] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +4.322115] systemd-fstab-generator[476]: Ignoring "noauto" option for root device
	[  +0.057798] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060958] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
	[  +2.352578] systemd-fstab-generator[1113]: Ignoring "noauto" option for root device
	[  +0.297044] systemd-fstab-generator[1150]: Ignoring "noauto" option for root device
	[  +0.121689] systemd-fstab-generator[1162]: Ignoring "noauto" option for root device
	[  +0.127513] systemd-fstab-generator[1176]: Ignoring "noauto" option for root device
	[  +2.293985] kauditd_printk_skb: 195 callbacks suppressed
	[  +0.325101] systemd-fstab-generator[1411]: Ignoring "noauto" option for root device
	[  +0.108851] systemd-fstab-generator[1423]: Ignoring "noauto" option for root device
	[  +0.138237] systemd-fstab-generator[1435]: Ignoring "noauto" option for root device
	[  +0.156114] systemd-fstab-generator[1450]: Ignoring "noauto" option for root device
	[  +0.494872] systemd-fstab-generator[1603]: Ignoring "noauto" option for root device
	[  +6.977462] kauditd_printk_skb: 176 callbacks suppressed
	[ +11.291301] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [24c8926d2b31] <==
	{"level":"info","ts":"2024-07-01T12:21:01.297933Z","caller":"traceutil/trace.go:171","msg":"trace[249123960] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; }","duration":"4.106112275s","start":"2024-07-01T12:20:57.191803Z","end":"2024-07-01T12:21:01.297915Z","steps":["trace[249123960] 'agreement among raft nodes before linearized reading'  (duration: 4.10601913s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-01T12:21:01.298006Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-01T12:20:57.191796Z","time spent":"4.106166982s","remote":"127.0.0.1:56240","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":0,"response size":0,"request content":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" count_only:true "}
	2024/07/01 12:21:01 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/01 12:21:01 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/01 12:21:01 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-01T12:21:01.381902Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.16:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-01T12:21:01.38194Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.16:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-01T12:21:01.38203Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b6c76b3131c1024","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-01T12:21:01.382382Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c77bbbee62c21090"}
	{"level":"info","ts":"2024-07-01T12:21:01.382398Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c77bbbee62c21090"}
	{"level":"info","ts":"2024-07-01T12:21:01.38247Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c77bbbee62c21090"}
	{"level":"info","ts":"2024-07-01T12:21:01.382583Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b6c76b3131c1024","remote-peer-id":"c77bbbee62c21090"}
	{"level":"info","ts":"2024-07-01T12:21:01.382685Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b6c76b3131c1024","remote-peer-id":"c77bbbee62c21090"}
	{"level":"info","ts":"2024-07-01T12:21:01.382809Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b6c76b3131c1024","remote-peer-id":"c77bbbee62c21090"}
	{"level":"info","ts":"2024-07-01T12:21:01.382826Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c77bbbee62c21090"}
	{"level":"info","ts":"2024-07-01T12:21:01.382832Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:21:01.382882Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:21:01.3829Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:21:01.385706Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b6c76b3131c1024","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:21:01.385804Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b6c76b3131c1024","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:21:01.385838Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b6c76b3131c1024","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:21:01.385849Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:21:01.406065Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.16:2380"}
	{"level":"info","ts":"2024-07-01T12:21:01.406193Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.16:2380"}
	{"level":"info","ts":"2024-07-01T12:21:01.406214Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-735960","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.16:2380"],"advertise-client-urls":["https://192.168.39.16:2379"]}
	
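This etcd log belongs to the pre-stop instance (24c8926d2b31). The opening trace records a linearized range that spent 4.1 seconds waiting for raft agreement, which is how reads look while the cluster is already losing quorum during shutdown; the rest is a clean teardown, and "skipped leadership transfer; local server is not leader" is expected when a non-leader member stops. A sketch for pulling just those markers back out, assuming docker CLI access:

    # extract the shutdown markers from the old etcd container's log
    docker logs 24c8926d2b31 2>&1 | grep -E 'leadership transfer|closed etcd server'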
	
	==> etcd [6a200a6b4902] <==
	{"level":"warn","ts":"2024-07-01T12:22:54.827561Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: i/o timeout"}
	{"level":"info","ts":"2024-07-01T12:22:56.088711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:56.088779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:56.088792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:56.088806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:56.088813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:57.488845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:57.488894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:57.488907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:57.488922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:57.488929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:58.888088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:58.888193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:58.888234Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:58.888281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:58.888295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"warn","ts":"2024-07-01T12:22:59.811118Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c77bbbee62c21090","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:22:59.811186Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c77bbbee62c21090","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:22:59.827782Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-01T12:22:59.82782Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: no route to host"}
	{"level":"info","ts":"2024-07-01T12:23:00.288491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:00.288559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:00.288572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:00.288586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:00.288593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	
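The restarted etcd (6a200a6b4902) is stuck in pre-vote at term 2: roughly every 1.5 seconds it sends MsgPreVote to 77557cf66c24e9ff (192.168.39.97, m03) and c77bbbee62c21090 (192.168.39.86, m02), and both peer probes fail with "i/o timeout", "no route to host", or "connection refused". With only one of three members reachable there is no quorum, so no leader can be elected and every linearized request stalls; this is the root cause behind the apiserver failure shown below. A health-check sketch, assuming etcdctl is available on the node and minikube's default certificate paths:

    # ask etcd for cluster-wide endpoint health from the surviving node
    sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint health --cluster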
	
	==> kernel <==
	 12:23:01 up 1 min,  0 users,  load average: 0.14, 0.07, 0.02
	Linux ha-735960 5.10.207 #1 SMP Wed Jun 26 19:37:34 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f472aef5302f] <==
	I0701 12:20:12.428842       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:20:22.443154       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:20:22.443292       1 main.go:227] handling current node
	I0701 12:20:22.443323       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:20:22.443388       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:20:22.443605       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:20:22.443653       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:20:22.443793       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:20:22.443836       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:20:32.451395       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:20:32.451431       1 main.go:227] handling current node
	I0701 12:20:32.451481       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:20:32.451486       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:20:32.451947       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:20:32.451980       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:20:32.452873       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:20:32.453015       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:20:42.470169       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:20:42.470264       1 main.go:227] handling current node
	I0701 12:20:42.470289       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:20:42.470302       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:20:42.470523       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:20:42.470616       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:20:42.470868       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:20:42.470914       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [e546c39248bc] <==
	I0701 12:22:27.228496       1 options.go:221] external host was not specified, using 192.168.39.16
	I0701 12:22:27.229584       1 server.go:148] Version: v1.30.2
	I0701 12:22:27.229706       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:22:27.544729       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0701 12:22:27.547846       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0701 12:22:27.551600       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0701 12:22:27.551634       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0701 12:22:27.551982       1 instance.go:299] Using reconciler: lease
	W0701 12:22:47.544372       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0701 12:22:47.544664       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0701 12:22:47.553171       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
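The apiserver startup above is the etcd outage seen from the client side: "Using reconciler: lease" at 12:22:27 opens the storage backend, the gRPC connections to 127.0.0.1:2379 never complete within the 20-second deadline, and the process exits fatally with "Error creating leases". The TCP port is open (etcd is running), but the storage layer never becomes usable, consistent with the leaderless etcd above. A sketch for confirming the crash-loop cadence, assuming docker CLI access:

    # list apiserver attempts and their exit status
    docker ps -a --filter name=kube-apiserver --format '{{.ID}} {{.Status}}'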
	
	==> kube-controller-manager [829fe19c75ce] <==
	I0701 12:22:24.521097       1 serving.go:380] Generated self-signed cert in-memory
	I0701 12:22:24.837441       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0701 12:22:24.837478       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:22:24.839276       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0701 12:22:24.839470       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0701 12:22:24.839988       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0701 12:22:24.840049       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0701 12:22:48.561111       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.16:8443/healthz\": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:57228->192.168.39.16:8443: read: connection reset by peer"
	
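The controller-manager gets further: it generates a serving cert, binds 127.0.0.1:10257, and then blocks waiting for the apiserver's /healthz. The "connection reset by peer" on a previous attempt lines up with the apiserver's fatal exit at 12:22:47, and subsequent retries get connection refused until the wait times out at 12:22:48. A probe sketch for both health endpoints (unauthenticated /healthz access under default kubeadm-style RBAC is assumed here):

    # probe the apiserver and the controller-manager's own health endpoint
    curl -k --max-time 3 https://192.168.39.16:8443/healthz; echo
    curl -k --max-time 3 https://127.0.0.1:10257/healthz; echo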
	
	==> kube-proxy [6116abe6039d] <==
	I0701 12:16:09.205590       1 server_linux.go:69] "Using iptables proxy"
	I0701 12:16:09.223098       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.16"]
	I0701 12:16:09.284088       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0701 12:16:09.284134       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0701 12:16:09.284152       1 server_linux.go:165] "Using iptables Proxier"
	I0701 12:16:09.286802       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0701 12:16:09.287240       1 server.go:872] "Version info" version="v1.30.2"
	I0701 12:16:09.287274       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:16:09.288803       1 config.go:192] "Starting service config controller"
	I0701 12:16:09.288830       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0701 12:16:09.289262       1 config.go:101] "Starting endpoint slice config controller"
	I0701 12:16:09.289283       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0701 12:16:09.290101       1 config.go:319] "Starting node config controller"
	I0701 12:16:09.290125       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0701 12:16:09.389941       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0701 12:16:09.390030       1 shared_informer.go:320] Caches are synced for service config
	I0701 12:16:09.390393       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2d71437c5f06] <==
	Trace[1841834859]: ---"Objects listed" error:Get "https://192.168.39.16:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:57242->192.168.39.16:8443: read: connection reset by peer 10642ms (12:22:48.563)
	Trace[1841834859]: [10.642423199s] [10.642423199s] END
	E0701 12:22:48.563438       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.16:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:57242->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.563506       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.16:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59182->192.168.39.16:8443: read: connection reset by peer
	E0701 12:22:48.563570       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.16:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59182->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.563641       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59186->192.168.39.16:8443: read: connection reset by peer
	E0701 12:22:48.563665       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59186->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.563724       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.16:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59188->192.168.39.16:8443: read: connection reset by peer
	E0701 12:22:48.563747       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.16:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59188->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.563814       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.16:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59202->192.168.39.16:8443: read: connection reset by peer
	E0701 12:22:48.563830       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.16:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59202->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.563886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59238->192.168.39.16:8443: read: connection reset by peer
	E0701 12:22:48.563907       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59238->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.563967       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59262->192.168.39.16:8443: read: connection reset by peer
	E0701 12:22:48.563982       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59262->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.563997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59210->192.168.39.16:8443: read: connection reset by peer
	E0701 12:22:48.564229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59210->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.669137       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.16:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:22:48.669192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.16:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:22:51.792652       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.16:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:22:51.792757       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.16:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:22:52.248014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.16:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:22:52.248063       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.16:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:22:55.201032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.16:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:22:55.201141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.16:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	
	
	==> kube-scheduler [cb63d5441180] <==
	W0701 12:15:50.916180       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0701 12:15:50.916379       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0701 12:15:51.752711       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 12:15:51.752853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0701 12:15:51.794007       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0701 12:15:51.794055       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0701 12:15:51.931391       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0701 12:15:51.931434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0701 12:15:51.950120       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0701 12:15:51.950162       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0701 12:15:51.968922       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0701 12:15:51.969125       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0701 12:15:51.985991       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0701 12:15:51.986032       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0701 12:15:52.054298       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0701 12:15:52.054329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0701 12:15:52.260873       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0701 12:15:52.260979       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0701 12:15:54.206866       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0701 12:19:09.710917       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xv95g\": pod kube-proxy-xv95g is already assigned to node \"ha-735960-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xv95g" node="ha-735960-m04"
	E0701 12:19:09.713930       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xv95g\": pod kube-proxy-xv95g is already assigned to node \"ha-735960-m04\"" pod="kube-system/kube-proxy-xv95g"
	I0701 12:21:01.200143       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0701 12:21:01.200254       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0701 12:21:01.200659       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0701 12:21:01.212693       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 01 12:22:48 ha-735960 kubelet[1610]: I0701 12:22:48.161197    1610 scope.go:117] "RemoveContainer" containerID="e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30"
	Jul 01 12:22:48 ha-735960 kubelet[1610]: E0701 12:22:48.162173    1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-735960_kube-system(858bfcad8b1d02b8cdc3dc83c4af060c)\"" pod="kube-system/kube-apiserver-ha-735960" podUID="858bfcad8b1d02b8cdc3dc83c4af060c"
	Jul 01 12:22:49 ha-735960 kubelet[1610]: I0701 12:22:49.180032    1610 scope.go:117] "RemoveContainer" containerID="ad4259a9c8ee03ff4c6910c68c5c866481fede150d57267cdc957e46aca4ef80"
	Jul 01 12:22:49 ha-735960 kubelet[1610]: I0701 12:22:49.181799    1610 scope.go:117] "RemoveContainer" containerID="829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d"
	Jul 01 12:22:49 ha-735960 kubelet[1610]: E0701 12:22:49.182112    1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-735960_kube-system(9a545edc3c0d885e2370d3a24ff8ac4b)\"" pod="kube-system/kube-controller-manager-ha-735960" podUID="9a545edc3c0d885e2370d3a24ff8ac4b"
	Jul 01 12:22:50 ha-735960 kubelet[1610]: I0701 12:22:50.089167    1610 scope.go:117] "RemoveContainer" containerID="e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30"
	Jul 01 12:22:50 ha-735960 kubelet[1610]: E0701 12:22:50.089722    1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-735960_kube-system(858bfcad8b1d02b8cdc3dc83c4af060c)\"" pod="kube-system/kube-apiserver-ha-735960" podUID="858bfcad8b1d02b8cdc3dc83c4af060c"
	Jul 01 12:22:50 ha-735960 kubelet[1610]: I0701 12:22:50.202365    1610 scope.go:117] "RemoveContainer" containerID="829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d"
	Jul 01 12:22:50 ha-735960 kubelet[1610]: E0701 12:22:50.202700    1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-735960_kube-system(9a545edc3c0d885e2370d3a24ff8ac4b)\"" pod="kube-system/kube-controller-manager-ha-735960" podUID="9a545edc3c0d885e2370d3a24ff8ac4b"
	Jul 01 12:22:51 ha-735960 kubelet[1610]: I0701 12:22:51.209935    1610 scope.go:117] "RemoveContainer" containerID="829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d"
	Jul 01 12:22:51 ha-735960 kubelet[1610]: E0701 12:22:51.210647    1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-735960_kube-system(9a545edc3c0d885e2370d3a24ff8ac4b)\"" pod="kube-system/kube-controller-manager-ha-735960" podUID="9a545edc3c0d885e2370d3a24ff8ac4b"
	Jul 01 12:22:51 ha-735960 kubelet[1610]: I0701 12:22:51.576067    1610 kubelet_node_status.go:73] "Attempting to register node" node="ha-735960"
	Jul 01 12:22:53 ha-735960 kubelet[1610]: I0701 12:22:53.728933    1610 scope.go:117] "RemoveContainer" containerID="e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30"
	Jul 01 12:22:53 ha-735960 kubelet[1610]: E0701 12:22:53.729329    1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-735960_kube-system(858bfcad8b1d02b8cdc3dc83c4af060c)\"" pod="kube-system/kube-apiserver-ha-735960" podUID="858bfcad8b1d02b8cdc3dc83c4af060c"
	Jul 01 12:22:53 ha-735960 kubelet[1610]: E0701 12:22:53.789831    1610 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.254:8443: connect: no route to host" node="ha-735960"
	Jul 01 12:22:53 ha-735960 kubelet[1610]: E0701 12:22:53.790000    1610 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-735960?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jul 01 12:22:56 ha-735960 kubelet[1610]: W0701 12:22:56.862031    1610 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 01 12:22:56 ha-735960 kubelet[1610]: E0701 12:22:56.862122    1610 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 01 12:22:57 ha-735960 kubelet[1610]: E0701 12:22:57.094040    1610 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-735960\" not found"
	Jul 01 12:22:59 ha-735960 kubelet[1610]: W0701 12:22:59.934973    1610 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 01 12:22:59 ha-735960 kubelet[1610]: E0701 12:22:59.935046    1610 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 01 12:22:59 ha-735960 kubelet[1610]: W0701 12:22:59.935096    1610 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-735960&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 01 12:22:59 ha-735960 kubelet[1610]: E0701 12:22:59.935120    1610 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-735960&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 01 12:22:59 ha-735960 kubelet[1610]: E0701 12:22:59.935170    1610 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-735960.17de162e90ad8f5f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-735960,UID:ha-735960,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-735960,},FirstTimestamp:2024-07-01 12:21:36.953708383 +0000 UTC m=+0.183371310,LastTimestamp:2024-07-01 12:21:36.953708383 +0000 UTC m=+0.183371310,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-735960,}"
	Jul 01 12:23:00 ha-735960 kubelet[1610]: I0701 12:23:00.791239    1610 kubelet_node_status.go:73] "Attempting to register node" node="ha-735960"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-735960 -n ha-735960
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-735960 -n ha-735960: exit status 2 (228.697485ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-735960" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (1.89s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:413: expected profile "ha-735960" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-735960\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-735960\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-735960\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.16\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.86\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.97\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.60\",\"Port\":0,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-735960 -n ha-735960
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-735960 -n ha-735960: exit status 2 (224.251429ms)

-- stdout --
	Running

                                                
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m02 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m03_ha-735960-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m03:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04:/home/docker/cp-test_ha-735960-m03_ha-735960-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m04 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m03_ha-735960-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-735960 cp testdata/cp-test.txt                                                | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2826819896/001/cp-test_ha-735960-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960:/home/docker/cp-test_ha-735960-m04_ha-735960.txt                       |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960 sudo cat                                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m04_ha-735960.txt                                 |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m02:/home/docker/cp-test_ha-735960-m04_ha-735960-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m02 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m04_ha-735960-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m03:/home/docker/cp-test_ha-735960-m04_ha-735960-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m03 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m04_ha-735960-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-735960 node stop m02 -v=7                                                     | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-735960 node start m02 -v=7                                                    | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:20 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-735960 -v=7                                                           | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-735960 -v=7                                                                | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:20 UTC | 01 Jul 24 12:21 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-735960 --wait=true -v=7                                                    | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-735960                                                                | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:22 UTC |                     |
	| node    | ha-735960 node delete m03 -v=7                                                   | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/01 12:21:13
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 12:21:13.996326  652196 out.go:291] Setting OutFile to fd 1 ...
	I0701 12:21:13.996600  652196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:21:13.996610  652196 out.go:304] Setting ErrFile to fd 2...
	I0701 12:21:13.996615  652196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:21:13.996825  652196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
	I0701 12:21:13.997417  652196 out.go:298] Setting JSON to false
	I0701 12:21:13.998463  652196 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7412,"bootTime":1719829062,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 12:21:13.998525  652196 start.go:139] virtualization: kvm guest
	I0701 12:21:14.000967  652196 out.go:177] * [ha-735960] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0701 12:21:14.002666  652196 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 12:21:14.002690  652196 notify.go:220] Checking for updates...
	I0701 12:21:14.005489  652196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:21:14.006983  652196 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:21:14.008350  652196 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-630650/.minikube
	I0701 12:21:14.009593  652196 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 12:21:14.011091  652196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:21:14.012857  652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:21:14.012999  652196 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 12:21:14.013468  652196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:21:14.013542  652196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:21:14.028581  652196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35775
	I0701 12:21:14.028967  652196 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:21:14.029528  652196 main.go:141] libmachine: Using API Version  1
	I0701 12:21:14.029551  652196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:21:14.029916  652196 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:21:14.030116  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:14.065038  652196 out.go:177] * Using the kvm2 driver based on existing profile
	I0701 12:21:14.066535  652196 start.go:297] selected driver: kvm2
	I0701 12:21:14.066551  652196 start.go:901] validating driver "kvm2" against &{Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:21:14.066723  652196 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:21:14.067041  652196 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:21:14.067114  652196 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19166-630650/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0701 12:21:14.082191  652196 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0701 12:21:14.082920  652196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:21:14.082959  652196 cni.go:84] Creating CNI manager for ""
	I0701 12:21:14.082966  652196 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0701 12:21:14.083026  652196 start.go:340] cluster config:
	{Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:21:14.083142  652196 iso.go:125] acquiring lock: {Name:mk5c70910f61bc270c83609c48670eaf9d7e0602 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:21:14.086358  652196 out.go:177] * Starting "ha-735960" primary control-plane node in "ha-735960" cluster
	I0701 12:21:14.087757  652196 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:21:14.087794  652196 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0701 12:21:14.087805  652196 cache.go:56] Caching tarball of preloaded images
	I0701 12:21:14.087882  652196 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:21:14.087892  652196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:21:14.088044  652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:21:14.088232  652196 start.go:360] acquireMachinesLock for ha-735960: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:21:14.088271  652196 start.go:364] duration metric: took 21.615µs to acquireMachinesLock for "ha-735960"
	I0701 12:21:14.088285  652196 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:21:14.088293  652196 fix.go:54] fixHost starting: 
	I0701 12:21:14.088547  652196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:21:14.088578  652196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:21:14.103070  652196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42935
	I0701 12:21:14.103508  652196 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:21:14.104025  652196 main.go:141] libmachine: Using API Version  1
	I0701 12:21:14.104050  652196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:21:14.104424  652196 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:21:14.104649  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:14.104829  652196 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:21:14.106608  652196 fix.go:112] recreateIfNeeded on ha-735960: state=Stopped err=<nil>
	I0701 12:21:14.106630  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	W0701 12:21:14.106790  652196 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:21:14.108833  652196 out.go:177] * Restarting existing kvm2 VM for "ha-735960" ...
	I0701 12:21:14.110060  652196 main.go:141] libmachine: (ha-735960) Calling .Start
	I0701 12:21:14.110234  652196 main.go:141] libmachine: (ha-735960) Ensuring networks are active...
	I0701 12:21:14.110976  652196 main.go:141] libmachine: (ha-735960) Ensuring network default is active
	I0701 12:21:14.111299  652196 main.go:141] libmachine: (ha-735960) Ensuring network mk-ha-735960 is active
	I0701 12:21:14.111665  652196 main.go:141] libmachine: (ha-735960) Getting domain xml...
	I0701 12:21:14.112420  652196 main.go:141] libmachine: (ha-735960) Creating domain...
	I0701 12:21:15.307133  652196 main.go:141] libmachine: (ha-735960) Waiting to get IP...
	I0701 12:21:15.308062  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:15.308526  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:15.308647  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:15.308493  652224 retry.go:31] will retry after 239.111405ms: waiting for machine to come up
	I0701 12:21:15.549211  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:15.549648  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:15.549679  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:15.549597  652224 retry.go:31] will retry after 248.256131ms: waiting for machine to come up
	I0701 12:21:15.799054  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:15.799481  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:15.799534  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:15.799422  652224 retry.go:31] will retry after 380.468685ms: waiting for machine to come up
	I0701 12:21:16.181969  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:16.182432  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:16.182634  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:16.182540  652224 retry.go:31] will retry after 592.847587ms: waiting for machine to come up
	I0701 12:21:16.777393  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:16.777837  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:16.777867  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:16.777790  652224 retry.go:31] will retry after 639.749416ms: waiting for machine to come up
	I0701 12:21:17.419540  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:17.419941  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:17.419965  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:17.419916  652224 retry.go:31] will retry after 891.768613ms: waiting for machine to come up
	I0701 12:21:18.312967  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:18.313455  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:18.313484  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:18.313399  652224 retry.go:31] will retry after 1.112048412s: waiting for machine to come up
	I0701 12:21:19.427190  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:19.427624  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:19.427655  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:19.427568  652224 retry.go:31] will retry after 1.150138437s: waiting for machine to come up
	I0701 12:21:20.579868  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:20.580291  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:20.580325  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:20.580216  652224 retry.go:31] will retry after 1.129763596s: waiting for machine to come up
	I0701 12:21:21.711416  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:21.711892  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:21.711924  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:21.711831  652224 retry.go:31] will retry after 2.143074349s: waiting for machine to come up
	I0701 12:21:23.858081  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:23.858617  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:23.858643  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:23.858578  652224 retry.go:31] will retry after 2.436757856s: waiting for machine to come up
	I0701 12:21:26.297727  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:26.298302  652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:21:26.298352  652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:26.298269  652224 retry.go:31] will retry after 2.609229165s: waiting for machine to come up
	I0701 12:21:28.911224  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:28.911698  652196 main.go:141] libmachine: (ha-735960) Found IP for machine: 192.168.39.16
	I0701 12:21:28.911722  652196 main.go:141] libmachine: (ha-735960) Reserving static IP address...
	I0701 12:21:28.911731  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has current primary IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:28.912401  652196 main.go:141] libmachine: (ha-735960) Reserved static IP address: 192.168.39.16
	I0701 12:21:28.912425  652196 main.go:141] libmachine: (ha-735960) Waiting for SSH to be available...
	I0701 12:21:28.912468  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "ha-735960", mac: "52:54:00:6c:20:7c", ip: "192.168.39.16"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:28.912492  652196 main.go:141] libmachine: (ha-735960) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960", mac: "52:54:00:6c:20:7c", ip: "192.168.39.16"}
	I0701 12:21:28.912507  652196 main.go:141] libmachine: (ha-735960) DBG | Getting to WaitForSSH function...
	I0701 12:21:28.914934  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:28.915448  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:28.915478  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:28.915627  652196 main.go:141] libmachine: (ha-735960) DBG | Using SSH client type: external
	I0701 12:21:28.915655  652196 main.go:141] libmachine: (ha-735960) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa (-rw-------)
	I0701 12:21:28.915680  652196 main.go:141] libmachine: (ha-735960) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:21:28.915698  652196 main.go:141] libmachine: (ha-735960) DBG | About to run SSH command:
	I0701 12:21:28.915730  652196 main.go:141] libmachine: (ha-735960) DBG | exit 0
	I0701 12:21:29.042314  652196 main.go:141] libmachine: (ha-735960) DBG | SSH cmd err, output: <nil>: 
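	[editor's note] The WaitForSSH step above shells out to the external ssh binary and runs `exit 0`; a clean exit means sshd is up and the key is accepted. A sketch of that probe, assuming the flag subset shown in the log; the function name is illustrative, not libmachine's implementation:

	package main

	import "os/exec"

	// sshReady runs `exit 0` on the guest with the external-ssh flags
	// logged above; a nil error means the machine is reachable over SSH.
	func sshReady(ip, keyPath string) error {
		return exec.Command("/usr/bin/ssh",
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes", "-i", keyPath,
			"-p", "22", "docker@"+ip, "exit 0",
		).Run()
	}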
	I0701 12:21:29.042747  652196 main.go:141] libmachine: (ha-735960) Calling .GetConfigRaw
	I0701 12:21:29.043414  652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:21:29.046291  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.046689  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.046714  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.046967  652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:21:29.047187  652196 machine.go:94] provisionDockerMachine start ...
	I0701 12:21:29.047211  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:29.047467  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.049524  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.049899  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.049924  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.050040  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.050240  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.050477  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.050669  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.050868  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:29.051073  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:29.051086  652196 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:21:29.166645  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:21:29.166687  652196 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:21:29.166983  652196 buildroot.go:166] provisioning hostname "ha-735960"
	I0701 12:21:29.167013  652196 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:21:29.167232  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.169829  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.170228  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.170260  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.170403  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.170603  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.170773  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.170913  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.171082  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:29.171259  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:29.171270  652196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960 && echo "ha-735960" | sudo tee /etc/hostname
	I0701 12:21:29.295697  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960
	
	I0701 12:21:29.295728  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.298625  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.299014  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.299041  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.299233  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.299434  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.299641  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.299795  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.299954  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:29.300149  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:29.300171  652196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:21:29.418489  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 12:21:29.418522  652196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:21:29.418577  652196 buildroot.go:174] setting up certificates
	I0701 12:21:29.418593  652196 provision.go:84] configureAuth start
	I0701 12:21:29.418612  652196 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:21:29.418889  652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:21:29.421815  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.422238  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.422275  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.422477  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.424787  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.425187  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.425216  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.425427  652196 provision.go:143] copyHostCerts
	I0701 12:21:29.425466  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:21:29.425530  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:21:29.425542  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:21:29.425624  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:21:29.425732  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:21:29.425753  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:21:29.425758  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:21:29.425798  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:21:29.425856  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:21:29.425872  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:21:29.425877  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:21:29.425897  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:21:29.425958  652196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960 san=[127.0.0.1 192.168.39.16 ha-735960 localhost minikube]
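	[editor's note] The line above issues a server certificate whose SANs cover the loopback address, the node IP, and the machine hostnames. A minimal crypto/x509 sketch of issuing such a cert from a CA, with the SAN set taken from the log; this is an assumption-laden illustration, not minikube's own helper:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newServerCert signs a server cert with the CA, listing the same
	// SANs the log shows: 127.0.0.1, 192.168.39.16, ha-735960, localhost,
	// minikube. Returns DER bytes plus the new private key.
	func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-735960"}},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.16")},
			DNSNames:     []string{"ha-735960", "localhost", "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // long-lived, illustrative
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		return der, key, err
	}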
	I0701 12:21:29.592360  652196 provision.go:177] copyRemoteCerts
	I0701 12:21:29.592437  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:21:29.592463  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.595489  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.595884  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.595908  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.596131  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.596356  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.596515  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.596646  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:21:29.684124  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:21:29.684214  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0701 12:21:29.707185  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:21:29.707254  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 12:21:29.729605  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:21:29.729687  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:21:29.751505  652196 provision.go:87] duration metric: took 332.894756ms to configureAuth
	I0701 12:21:29.751536  652196 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:21:29.751802  652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:21:29.751834  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:29.752179  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.754903  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.755331  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.755367  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.755494  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.755709  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.755868  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.756016  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.756168  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:29.756341  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:29.756351  652196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:21:29.867557  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:21:29.867582  652196 buildroot.go:70] root file system type: tmpfs
	I0701 12:21:29.867738  652196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:21:29.867768  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.870702  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.871111  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.871152  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.871294  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.871532  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.871806  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.871989  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.872177  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:29.872347  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:29.872410  652196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:21:29.995623  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:21:29.995671  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:29.998574  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.998969  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:29.999001  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:29.999184  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:29.999403  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.999598  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:29.999772  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:29.999916  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:30.000093  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:30.000109  652196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:21:31.849411  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 12:21:31.849452  652196 machine.go:97] duration metric: took 2.802248138s to provisionDockerMachine
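	[editor's note] The diff/mv one-liner above only swaps in docker.service.new and restarts Docker when the rendered unit differs from the installed one (here it differed because the file did not exist yet, hence the "can't stat" message). A local Go sketch of the same compare-then-swap idiom, with illustrative names:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// updateUnit installs new unit content only when it differs from what
	// is already on disk, then reloads systemd and restarts the service --
	// the same idempotent swap the ssh one-liner performs.
	func updateUnit(path string, content []byte, service string) error {
		if old, err := os.ReadFile(path); err == nil && bytes.Equal(old, content) {
			return nil // unchanged; skip the disruptive restart
		}
		if err := os.WriteFile(path, content, 0o644); err != nil {
			return err
		}
		for _, args := range [][]string{{"daemon-reload"}, {"enable", service}, {"restart", service}} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
			}
		}
		return nil
	}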
	I0701 12:21:31.849473  652196 start.go:293] postStartSetup for "ha-735960" (driver="kvm2")
	I0701 12:21:31.849487  652196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:21:31.849508  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:31.849934  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:21:31.849982  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:31.853029  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:31.853464  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:31.853494  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:31.853656  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:31.853877  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:31.854065  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:31.854242  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:21:31.948096  652196 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:21:31.952493  652196 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:21:31.952522  652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:21:31.952580  652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:21:31.952654  652196 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:21:31.952664  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:21:31.952750  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:21:31.962034  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:21:31.985898  652196 start.go:296] duration metric: took 136.407484ms for postStartSetup
	I0701 12:21:31.985953  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:31.986287  652196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:21:31.986316  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:31.988934  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:31.989328  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:31.989359  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:31.989497  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:31.989724  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:31.989863  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:31.990038  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:21:32.076710  652196 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:21:32.076807  652196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:21:32.133792  652196 fix.go:56] duration metric: took 18.045488816s for fixHost
	I0701 12:21:32.133863  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:32.136703  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.137078  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:32.137110  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.137321  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:32.137591  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:32.137793  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:32.137963  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:32.138201  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:32.138518  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:21:32.138541  652196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0701 12:21:32.254973  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836492.215186729
	
	I0701 12:21:32.255001  652196 fix.go:216] guest clock: 1719836492.215186729
	I0701 12:21:32.255007  652196 fix.go:229] Guest: 2024-07-01 12:21:32.215186729 +0000 UTC Remote: 2024-07-01 12:21:32.133836118 +0000 UTC m=+18.172225533 (delta=81.350611ms)
	I0701 12:21:32.255027  652196 fix.go:200] guest clock delta is within tolerance: 81.350611ms
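	[editor's note] The fix.go lines above read the guest clock via `date +%s.%N`, subtract the host clock, and accept the 81ms drift as within tolerance. The check reduces to an absolute-difference comparison; a sketch with illustrative names:

	package main

	import "time"

	// clockDeltaOK mirrors the guest-clock check above: any absolute
	// drift between guest and host inside the tolerance passes.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}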
	I0701 12:21:32.255032  652196 start.go:83] releasing machines lock for "ha-735960", held for 18.166751927s
	I0701 12:21:32.255050  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:32.255338  652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:21:32.258091  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.258459  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:32.258481  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.258679  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:32.259224  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:32.259383  652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:21:32.259520  652196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:21:32.259564  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:32.259693  652196 ssh_runner.go:195] Run: cat /version.json
	I0701 12:21:32.259718  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:21:32.262127  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.262481  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:32.262518  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.262538  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.262653  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:32.262845  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:32.263031  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:32.263054  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:32.263074  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:32.263215  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:21:32.263229  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:21:32.263398  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:21:32.263547  652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:21:32.263699  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:21:32.343012  652196 ssh_runner.go:195] Run: systemctl --version
	I0701 12:21:32.428409  652196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0701 12:21:32.433742  652196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:21:32.433815  652196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:21:32.449052  652196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:21:32.449087  652196 start.go:494] detecting cgroup driver to use...
	I0701 12:21:32.449338  652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:21:32.471651  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:21:32.481832  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:21:32.491470  652196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:21:32.491548  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:21:32.501229  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:21:32.511119  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:21:32.520826  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:21:32.530559  652196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:21:32.542109  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:21:32.551821  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:21:32.561403  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:21:32.571068  652196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:21:32.579813  652196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:21:32.588595  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:32.705377  652196 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 12:21:32.724169  652196 start.go:494] detecting cgroup driver to use...
	I0701 12:21:32.724285  652196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:21:32.739050  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:21:32.753169  652196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:21:32.769805  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:21:32.783750  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:21:32.797509  652196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:21:32.821510  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:21:32.835901  652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:21:32.854192  652196 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:21:32.858039  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:21:32.867652  652196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:21:32.884216  652196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:21:33.001636  652196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:21:33.121229  652196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:21:33.121419  652196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
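	[editor's note] The "scp memory --> ..." steps write an in-memory asset (here the 130-byte daemon.json selecting the cgroupfs driver) straight to a path on the guest. One way to implement that is to stream the bytes over ssh into `sudo tee`; a sketch under that assumption, with illustrative names -- minikube's ssh_runner may differ in detail:

	package main

	import (
		"bytes"
		"os/exec"
	)

	// copyMemoryAsset pipes in-memory bytes into `sudo tee <dst>` on the
	// guest, the moral equivalent of the "scp memory --> ..." log lines.
	func copyMemoryAsset(ip, keyPath, dst string, data []byte) error {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
			"-i", keyPath, "docker@"+ip,
			"sudo tee "+dst+" >/dev/null")
		cmd.Stdin = bytes.NewReader(data)
		return cmd.Run()
	}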
	I0701 12:21:33.138482  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:33.262395  652196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:21:35.714549  652196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.452099351s)
	I0701 12:21:35.714642  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 12:21:35.727946  652196 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0701 12:21:35.744089  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:21:35.757426  652196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:21:35.868089  652196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:21:35.989857  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:36.121343  652196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:21:36.138520  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:21:36.152026  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:36.271312  652196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0701 12:21:36.351567  652196 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:21:36.351668  652196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0701 12:21:36.357143  652196 start.go:562] Will wait 60s for crictl version
	I0701 12:21:36.357212  652196 ssh_runner.go:195] Run: which crictl
	I0701 12:21:36.361384  652196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:21:36.400372  652196 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.1
	RuntimeApiVersion:  v1
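	[editor's note] The "Will wait 60s for socket path /var/run/cri-dockerd.sock" step above is a bounded poll for a file to appear before crictl is exercised. A sketch of that wait loop, assuming a simple stat-based poll; names and interval are illustrative:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForPath polls for a filesystem path (here the cri-dockerd
	// socket) until it appears or the deadline passes.
	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}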
	I0701 12:21:36.400446  652196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:21:36.427941  652196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:21:36.456620  652196 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
	I0701 12:21:36.456687  652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:21:36.459384  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:36.459752  652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:21:36.459781  652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:21:36.459970  652196 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0701 12:21:36.463956  652196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:21:36.476676  652196 kubeadm.go:877] updating cluster {Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0701 12:21:36.476851  652196 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:21:36.476914  652196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:21:36.493466  652196 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0701 12:21:36.493530  652196 docker.go:615] Images already preloaded, skipping extraction
	I0701 12:21:36.493620  652196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:21:36.510908  652196 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0701 12:21:36.510939  652196 cache_images.go:84] Images are preloaded, skipping loading
	I0701 12:21:36.510952  652196 kubeadm.go:928] updating node { 192.168.39.16 8443 v1.30.2 docker true true} ...
	I0701 12:21:36.511079  652196 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0701 12:21:36.511139  652196 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0701 12:21:36.536408  652196 cni.go:84] Creating CNI manager for ""
	I0701 12:21:36.536430  652196 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0701 12:21:36.536441  652196 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0701 12:21:36.536470  652196 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.16 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-735960 NodeName:ha-735960 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0701 12:21:36.536633  652196 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-735960"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0701 12:21:36.536656  652196 kube-vip.go:115] generating kube-vip config ...
	I0701 12:21:36.536698  652196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0701 12:21:36.551906  652196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0701 12:21:36.552024  652196 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0701 12:21:36.552078  652196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 12:21:36.561989  652196 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:21:36.562059  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0701 12:21:36.571281  652196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0701 12:21:36.587480  652196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:21:36.603596  652196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0701 12:21:36.621063  652196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0701 12:21:36.637192  652196 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0701 12:21:36.640909  652196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
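Editor's note: the shell one-liner above keeps the /etc/hosts update idempotent: strip any line already mapping control-plane.minikube.internal, append the VIP mapping, and copy the result back over /etc/hosts. A pure-Go equivalent of the same filter-and-append step (hypothetical helper; minikube itself runs the shell pipeline over SSH):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any line that already maps name, then
    // appends the desired "ip<TAB>name" mapping, mirroring the
    // grep -v / echo / cp pipeline in the log.
    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        kept := make([]string, 0, len(lines)+1)
        for _, line := range lines {
            if strings.HasSuffix(line, "\t"+name) {
                continue // remove the stale mapping
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }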
	I0701 12:21:36.652690  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:36.768142  652196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:21:36.786625  652196 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960 for IP: 192.168.39.16
	I0701 12:21:36.786655  652196 certs.go:194] generating shared ca certs ...
	I0701 12:21:36.786676  652196 certs.go:226] acquiring lock for ca certs: {Name:mk34e166bfd069e523b2325e14d1812c523bff53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:36.786854  652196 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key
	I0701 12:21:36.786904  652196 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key
	I0701 12:21:36.786915  652196 certs.go:256] generating profile certs ...
	I0701 12:21:36.787017  652196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key
	I0701 12:21:36.787046  652196 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af
	I0701 12:21:36.787059  652196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.16 192.168.39.86 192.168.39.97 192.168.39.254]
	I0701 12:21:37.059263  652196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af ...
	I0701 12:21:37.059305  652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af: {Name:mk1be9dc4667506ac6fdcfb1e313edd1292fe7f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:37.059483  652196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af ...
	I0701 12:21:37.059496  652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af: {Name:mkf9220e489bd04f035dab270c790bb3448ca6be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:37.059596  652196 certs.go:381] copying /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af -> /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt
	I0701 12:21:37.059809  652196 certs.go:385] copying /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af -> /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key
	I0701 12:21:37.059969  652196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key
	I0701 12:21:37.059987  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 12:21:37.060000  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 12:21:37.060014  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 12:21:37.060026  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 12:21:37.060038  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0701 12:21:37.060054  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0701 12:21:37.060066  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0701 12:21:37.060077  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0701 12:21:37.060165  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem (1338 bytes)
	W0701 12:21:37.060197  652196 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854_empty.pem, impossibly tiny 0 bytes
	I0701 12:21:37.060207  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 12:21:37.060228  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:21:37.060248  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:21:37.060270  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem (1675 bytes)
	I0701 12:21:37.060305  652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:21:37.060331  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem -> /usr/share/ca-certificates/637854.pem
	I0701 12:21:37.060347  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /usr/share/ca-certificates/6378542.pem
	I0701 12:21:37.060359  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:21:37.061045  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:21:37.111708  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:21:37.168649  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:21:37.204675  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:21:37.241167  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0701 12:21:37.265225  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 12:21:37.288613  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 12:21:37.312645  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 12:21:37.337494  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem --> /usr/share/ca-certificates/637854.pem (1338 bytes)
	I0701 12:21:37.361044  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /usr/share/ca-certificates/6378542.pem (1708 bytes)
	I0701 12:21:37.385424  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:21:37.409054  652196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 12:21:37.426602  652196 ssh_runner.go:195] Run: openssl version
	I0701 12:21:37.432129  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:21:37.442695  652196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:21:37.447331  652196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:21:37.447415  652196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:21:37.453215  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 12:21:37.464086  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/637854.pem && ln -fs /usr/share/ca-certificates/637854.pem /etc/ssl/certs/637854.pem"
	I0701 12:21:37.474527  652196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/637854.pem
	I0701 12:21:37.479057  652196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 12:11 /usr/share/ca-certificates/637854.pem
	I0701 12:21:37.479123  652196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/637854.pem
	I0701 12:21:37.484641  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/637854.pem /etc/ssl/certs/51391683.0"
	I0701 12:21:37.495175  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6378542.pem && ln -fs /usr/share/ca-certificates/6378542.pem /etc/ssl/certs/6378542.pem"
	I0701 12:21:37.505961  652196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6378542.pem
	I0701 12:21:37.510286  652196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 12:11 /usr/share/ca-certificates/6378542.pem
	I0701 12:21:37.510365  652196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem
	I0701 12:21:37.516124  652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6378542.pem /etc/ssl/certs/3ec20f2e.0"
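Editor's note: the sequence above installs each CA under /usr/share/ca-certificates and links it into /etc/ssl/certs under its subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0). OpenSSL resolves trust anchors by hashed filename, so the link name has to come from openssl x509 -hash. A Go sketch of one such step, shelling out to openssl exactly as the logged commands do (hypothetical helper, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkCertByHash asks openssl for the certificate's subject hash and
    // creates /etc/ssl/certs/<hash>.0 pointing at it, so OpenSSL's
    // hashed-directory lookup can find the CA.
    func linkCertByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // ln -fs semantics: replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }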
	I0701 12:21:37.527154  652196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 12:21:37.532024  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 12:21:37.538145  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 12:21:37.544280  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 12:21:37.550448  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 12:21:37.556356  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 12:21:37.562174  652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
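Editor's note: each openssl x509 -checkend 86400 run above exits non-zero when the certificate expires within 24 hours, which is the cue to regenerate it before restarting the control plane. The same test expressed with Go's crypto/x509 (a sketch, not minikube's implementation):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // in less than d, the question -checkend answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }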
	I0701 12:21:37.568144  652196 kubeadm.go:391] StartCluster: {Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:21:37.568362  652196 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0701 12:21:37.586457  652196 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0701 12:21:37.596129  652196 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0701 12:21:37.596158  652196 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0701 12:21:37.596164  652196 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0701 12:21:37.596237  652196 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 12:21:37.605715  652196 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 12:21:37.606193  652196 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-735960" does not appear in /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:21:37.606354  652196 kubeconfig.go:62] /home/jenkins/minikube-integration/19166-630650/kubeconfig needs updating (will repair): [kubeconfig missing "ha-735960" cluster setting kubeconfig missing "ha-735960" context setting]
	I0701 12:21:37.606708  652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/kubeconfig: {Name:mke3ef9d019eff4edd273b00c416fd77ed009242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:37.607135  652196 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:21:37.607365  652196 kapi.go:59] client config for ha-735960: &rest.Config{Host:"https://192.168.39.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key", CAFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfbb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0701 12:21:37.607752  652196 cert_rotation.go:137] Starting client certificate rotation controller
	I0701 12:21:37.608047  652196 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 12:21:37.617685  652196 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.16
	I0701 12:21:37.617715  652196 kubeadm.go:591] duration metric: took 21.544408ms to restartPrimaryControlPlane
	I0701 12:21:37.617725  652196 kubeadm.go:393] duration metric: took 49.593354ms to StartCluster
	I0701 12:21:37.617748  652196 settings.go:142] acquiring lock: {Name:mk6f7c85ea77a73ff0ac851454721f2e6e309153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:37.617834  652196 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:21:37.618535  652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/kubeconfig: {Name:mke3ef9d019eff4edd273b00c416fd77ed009242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:21:37.618754  652196 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:21:37.618777  652196 start.go:240] waiting for startup goroutines ...
	I0701 12:21:37.618792  652196 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0701 12:21:37.619028  652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:21:37.621683  652196 out.go:177] * Enabled addons: 
	I0701 12:21:37.622979  652196 addons.go:510] duration metric: took 4.192015ms for enable addons: enabled=[]
	I0701 12:21:37.623011  652196 start.go:245] waiting for cluster config update ...
	I0701 12:21:37.623019  652196 start.go:254] writing updated cluster config ...
	I0701 12:21:37.624600  652196 out.go:177] 
	I0701 12:21:37.626023  652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:21:37.626124  652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:21:37.627745  652196 out.go:177] * Starting "ha-735960-m02" control-plane node in "ha-735960" cluster
	I0701 12:21:37.628946  652196 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:21:37.628969  652196 cache.go:56] Caching tarball of preloaded images
	I0701 12:21:37.629060  652196 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:21:37.629072  652196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:21:37.629161  652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:21:37.629353  652196 start.go:360] acquireMachinesLock for ha-735960-m02: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:21:37.629411  652196 start.go:364] duration metric: took 31.79µs to acquireMachinesLock for "ha-735960-m02"
	I0701 12:21:37.629427  652196 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:21:37.629440  652196 fix.go:54] fixHost starting: m02
	I0701 12:21:37.629698  652196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:21:37.629747  652196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:21:37.644981  652196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I0701 12:21:37.645473  652196 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:21:37.645947  652196 main.go:141] libmachine: Using API Version  1
	I0701 12:21:37.645969  652196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:21:37.646284  652196 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:21:37.646523  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:37.646646  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetState
	I0701 12:21:37.648195  652196 fix.go:112] recreateIfNeeded on ha-735960-m02: state=Stopped err=<nil>
	I0701 12:21:37.648228  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	W0701 12:21:37.648406  652196 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:21:37.650489  652196 out.go:177] * Restarting existing kvm2 VM for "ha-735960-m02" ...
	I0701 12:21:37.651975  652196 main.go:141] libmachine: (ha-735960-m02) Calling .Start
	I0701 12:21:37.652186  652196 main.go:141] libmachine: (ha-735960-m02) Ensuring networks are active...
	I0701 12:21:37.652916  652196 main.go:141] libmachine: (ha-735960-m02) Ensuring network default is active
	I0701 12:21:37.653282  652196 main.go:141] libmachine: (ha-735960-m02) Ensuring network mk-ha-735960 is active
	I0701 12:21:37.653613  652196 main.go:141] libmachine: (ha-735960-m02) Getting domain xml...
	I0701 12:21:37.654254  652196 main.go:141] libmachine: (ha-735960-m02) Creating domain...
	I0701 12:21:38.852369  652196 main.go:141] libmachine: (ha-735960-m02) Waiting to get IP...
	I0701 12:21:38.853358  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:38.853762  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:38.853832  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:38.853747  652384 retry.go:31] will retry after 295.798088ms: waiting for machine to come up
	I0701 12:21:39.151332  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:39.151886  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:39.151912  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:39.151845  652384 retry.go:31] will retry after 255.18729ms: waiting for machine to come up
	I0701 12:21:39.408310  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:39.408739  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:39.408792  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:39.408689  652384 retry.go:31] will retry after 457.740061ms: waiting for machine to come up
	I0701 12:21:39.868295  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:39.868702  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:39.868736  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:39.868629  652384 retry.go:31] will retry after 548.674851ms: waiting for machine to come up
	I0701 12:21:40.419597  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:40.420069  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:40.420100  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:40.420009  652384 retry.go:31] will retry after 755.113146ms: waiting for machine to come up
	I0701 12:21:41.176960  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:41.177380  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:41.177429  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:41.177309  652384 retry.go:31] will retry after 739.288718ms: waiting for machine to come up
	I0701 12:21:41.918305  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:41.918853  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:41.918884  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:41.918789  652384 retry.go:31] will retry after 722.041404ms: waiting for machine to come up
	I0701 12:21:42.642704  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:42.643188  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:42.643219  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:42.643113  652384 retry.go:31] will retry after 1.139279839s: waiting for machine to come up
	I0701 12:21:43.784719  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:43.785159  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:43.785193  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:43.785114  652384 retry.go:31] will retry after 1.276779849s: waiting for machine to come up
	I0701 12:21:45.063522  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:45.064026  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:45.064058  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:45.063969  652384 retry.go:31] will retry after 2.284492799s: waiting for machine to come up
	I0701 12:21:47.351530  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:47.352076  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:47.352113  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:47.351988  652384 retry.go:31] will retry after 2.171521184s: waiting for machine to come up
	I0701 12:21:49.526162  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:49.526566  652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:21:49.526590  652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:49.526523  652384 retry.go:31] will retry after 3.533181759s: waiting for machine to come up
	I0701 12:21:53.061482  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.062025  652196 main.go:141] libmachine: (ha-735960-m02) Found IP for machine: 192.168.39.86
	I0701 12:21:53.062048  652196 main.go:141] libmachine: (ha-735960-m02) Reserving static IP address...
	I0701 12:21:53.062060  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has current primary IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.062473  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "ha-735960-m02", mac: "52:54:00:0b:2f:ce", ip: "192.168.39.86"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.062504  652196 main.go:141] libmachine: (ha-735960-m02) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960-m02", mac: "52:54:00:0b:2f:ce", ip: "192.168.39.86"}
	I0701 12:21:53.062534  652196 main.go:141] libmachine: (ha-735960-m02) Reserved static IP address: 192.168.39.86
	I0701 12:21:53.062554  652196 main.go:141] libmachine: (ha-735960-m02) Waiting for SSH to be available...
	I0701 12:21:53.062566  652196 main.go:141] libmachine: (ha-735960-m02) DBG | Getting to WaitForSSH function...
	I0701 12:21:53.064461  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.064796  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.064828  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.064893  652196 main.go:141] libmachine: (ha-735960-m02) DBG | Using SSH client type: external
	I0701 12:21:53.064938  652196 main.go:141] libmachine: (ha-735960-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa (-rw-------)
	I0701 12:21:53.064965  652196 main.go:141] libmachine: (ha-735960-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:21:53.064981  652196 main.go:141] libmachine: (ha-735960-m02) DBG | About to run SSH command:
	I0701 12:21:53.065000  652196 main.go:141] libmachine: (ha-735960-m02) DBG | exit 0
	I0701 12:21:53.190266  652196 main.go:141] libmachine: (ha-735960-m02) DBG | SSH cmd err, output: <nil>: 
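Editor's note: the retry.go lines above show the wait-for-DHCP pattern: poll libvirt for the domain's lease, sleep a growing, jittered delay when no IP is found, and try again until the address appears and SSH answers. A minimal sketch of that polling loop (assumed shape for illustration; not minikube's actual retry helper):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff polls fn, sleeping a jittered and roughly
    // doubling delay between failures, in the spirit of the
    // "will retry after ..." lines above.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        delay := base
        for i := 0; i < attempts; i++ {
            if err := fn(); err == nil {
                return nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            delay *= 2
        }
        return errors.New("machine did not come up")
    }

    func main() {
        _ = retryWithBackoff(5, 300*time.Millisecond, func() error {
            return errors.New("unable to find current IP address") // placeholder probe
        })
    }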
	I0701 12:21:53.190636  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetConfigRaw
	I0701 12:21:53.191272  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:21:53.193658  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.193994  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.194027  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.194274  652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:21:53.194544  652196 machine.go:94] provisionDockerMachine start ...
	I0701 12:21:53.194562  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:53.194814  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.196894  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.197262  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.197291  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.197414  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:53.197654  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.197829  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.198021  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:53.198185  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:53.198432  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:53.198448  652196 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:21:53.306480  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:21:53.306526  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:21:53.306839  652196 buildroot.go:166] provisioning hostname "ha-735960-m02"
	I0701 12:21:53.306870  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:21:53.307063  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.309645  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.310086  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.310116  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.310307  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:53.310514  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.310689  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.310820  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:53.310997  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:53.311210  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:53.311225  652196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960-m02 && echo "ha-735960-m02" | sudo tee /etc/hostname
	I0701 12:21:53.434956  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960-m02
	
	I0701 12:21:53.434992  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.437612  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.438016  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.438040  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.438190  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:53.438418  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.438601  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.438768  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:53.438926  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:53.439106  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:53.439128  652196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:21:53.559115  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 12:21:53.559146  652196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:21:53.559163  652196 buildroot.go:174] setting up certificates
	I0701 12:21:53.559174  652196 provision.go:84] configureAuth start
	I0701 12:21:53.559186  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:21:53.559514  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:21:53.562119  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.562516  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.562550  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.562753  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.564741  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.565063  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.565082  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.565233  652196 provision.go:143] copyHostCerts
	I0701 12:21:53.565266  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:21:53.565309  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:21:53.565318  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:21:53.565379  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:21:53.565450  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:21:53.565468  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:21:53.565474  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:21:53.565492  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:21:53.565533  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:21:53.565549  652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:21:53.565555  652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:21:53.565570  652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:21:53.565618  652196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960-m02 san=[127.0.0.1 192.168.39.86 ha-735960-m02 localhost minikube]
	I0701 12:21:53.749696  652196 provision.go:177] copyRemoteCerts
	I0701 12:21:53.749755  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:21:53.749780  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.752460  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.752780  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.752813  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.752952  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:53.753159  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.753385  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:53.753547  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:21:53.835990  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:21:53.836060  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:21:53.858665  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:21:53.858753  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0701 12:21:53.880281  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:21:53.880367  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 12:21:53.902677  652196 provision.go:87] duration metric: took 343.48703ms to configureAuth
	I0701 12:21:53.902709  652196 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:21:53.903020  652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:21:53.903053  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:53.903351  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:53.905929  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.906189  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:53.906216  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:53.906438  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:53.906667  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.906826  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:53.906966  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:53.907119  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:53.907282  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:53.907294  652196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:21:54.019474  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:21:54.019501  652196 buildroot.go:70] root file system type: tmpfs
	I0701 12:21:54.019656  652196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:21:54.019681  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:54.022816  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:54.023184  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:54.023208  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:54.023371  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:54.023579  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:54.023787  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:54.023946  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:54.024146  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:54.024319  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:54.024384  652196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target minikube-automount.service docker.socket
	Requires=minikube-automount.service docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.16"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:21:54.147740  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target minikube-automount.service docker.socket
	Requires=minikube-automount.service docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.16
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:21:54.147778  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:54.150547  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:54.151173  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:54.151208  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:54.151345  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:54.151561  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:54.151771  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:54.151918  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:54.152095  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:54.152266  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:54.152281  652196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:21:56.028628  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
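	
	The command above is an idempotent install idiom: diff -u exits non-zero when the
	old and new unit differ (or, as here, when the old file does not exist yet), and
	only in that case is the new file moved into place and docker reloaded, enabled,
	and restarted. A minimal sketch of the same pattern for an arbitrary config file
	(paths are illustrative):
	
		# Replace /etc/example.conf only when the staged copy actually differs:
		sudo diff -u /etc/example.conf /etc/example.conf.new || {
		  sudo mv /etc/example.conf.new /etc/example.conf
		  sudo systemctl daemon-reload
		}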
	
	I0701 12:21:56.028682  652196 machine.go:97] duration metric: took 2.834118436s to provisionDockerMachine
	I0701 12:21:56.028701  652196 start.go:293] postStartSetup for "ha-735960-m02" (driver="kvm2")
	I0701 12:21:56.028716  652196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:21:56.028738  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.029099  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:21:56.029132  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:56.031882  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.032264  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.032289  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.032433  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:56.032608  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.032817  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:56.032971  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:21:56.117309  652196 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:21:56.121231  652196 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:21:56.121263  652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:21:56.121324  652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:21:56.121391  652196 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:21:56.121402  652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:21:56.121478  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:21:56.130302  652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:21:56.152776  652196 start.go:296] duration metric: took 124.058691ms for postStartSetup
	I0701 12:21:56.152821  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.153142  652196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:21:56.153170  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:56.155689  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.156094  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.156120  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.156332  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:56.156555  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.156727  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:56.156917  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:21:56.240391  652196 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:21:56.240454  652196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:21:56.280843  652196 fix.go:56] duration metric: took 18.651393475s for fixHost
	I0701 12:21:56.280895  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:56.283268  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.283590  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.283617  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.283860  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:56.284107  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.284307  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.284501  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:56.284686  652196 main.go:141] libmachine: Using SSH client type: native
	I0701 12:21:56.284888  652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:21:56.284903  652196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0701 12:21:56.398873  652196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836516.359963406
	
	I0701 12:21:56.398893  652196 fix.go:216] guest clock: 1719836516.359963406
	I0701 12:21:56.398901  652196 fix.go:229] Guest: 2024-07-01 12:21:56.359963406 +0000 UTC Remote: 2024-07-01 12:21:56.280872467 +0000 UTC m=+42.319261894 (delta=79.090939ms)
	I0701 12:21:56.398919  652196 fix.go:200] guest clock delta is within tolerance: 79.090939ms
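	
	The guest-clock check above reads date +%s.%N inside the VM and compares it with
	the host's wall clock, accepting the machine when the absolute delta is within a
	small tolerance (the 79ms delta here passes). A rough shell equivalent of that
	comparison (the ssh target and the one-second threshold are illustrative):
	
		guest=$(ssh docker@192.168.39.86 'date +%s.%N')
		host=$(date +%s.%N)
		# Exit 0 when the absolute delta is below one second (illustrative threshold):
		awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; exit !(d < 1.0) }' \
		  && echo "guest clock within tolerance"
	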
	I0701 12:21:56.398924  652196 start.go:83] releasing machines lock for "ha-735960-m02", held for 18.769503298s
	I0701 12:21:56.398940  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.399198  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:21:56.401982  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.402404  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.402436  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.404680  652196 out.go:177] * Found network options:
	I0701 12:21:56.406167  652196 out.go:177]   - NO_PROXY=192.168.39.16
	W0701 12:21:56.407620  652196 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:21:56.407664  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.408285  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.408498  652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:21:56.408606  652196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:21:56.408647  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	W0701 12:21:56.408741  652196 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:21:56.408826  652196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0701 12:21:56.408849  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:21:56.411170  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.411559  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.411598  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.411651  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.411933  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:56.412130  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.412221  652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:21:56.412247  652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:21:56.412295  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:56.412519  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:21:56.412508  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:21:56.412720  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:21:56.412871  652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:21:56.412987  652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	W0701 12:21:56.492511  652196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:21:56.492595  652196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:21:56.515270  652196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:21:56.515305  652196 start.go:494] detecting cgroup driver to use...
	I0701 12:21:56.515419  652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:21:56.549004  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:21:56.560711  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:21:56.578763  652196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:21:56.578832  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:21:56.589742  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:21:56.606645  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:21:56.620036  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:21:56.632033  652196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:21:56.642458  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:21:56.653078  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:21:56.663035  652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:21:56.673203  652196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:21:56.682348  652196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:21:56.691388  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:56.798709  652196 ssh_runner.go:195] Run: sudo systemctl restart containerd
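	
	The sed edits above switch containerd to the cgroupfs driver (SystemdCgroup = false
	for the runc shim), pin the sandbox image, and reset the CNI conf dir before the
	restart. A quick way to confirm the rendered setting once containerd is back up:
	
		# Expect "SystemdCgroup = false" after the edits above:
		sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml
	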
	I0701 12:21:56.821386  652196 start.go:494] detecting cgroup driver to use...
	I0701 12:21:56.821493  652196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:21:56.841303  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:21:56.857934  652196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:21:56.877318  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:21:56.889777  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:21:56.901844  652196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:21:56.927595  652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:21:56.940849  652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:21:56.958116  652196 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:21:56.961664  652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:21:56.969985  652196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:21:56.985048  652196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:21:57.096072  652196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:21:57.211289  652196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:21:57.211354  652196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
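	
	The 130-byte daemon.json written here is what selects docker's cgroup driver. The
	log does not show its contents; a plausible shape, assuming only the driver is
	being set, is:
	
		sudo tee /etc/docker/daemon.json <<'EOF'
		{
		  "exec-opts": ["native.cgroupdriver=cgroupfs"]
		}
		EOF
	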
	I0701 12:21:57.227069  652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:21:57.341292  652196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:22:58.423195  652196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.08185799s)
	I0701 12:22:58.423268  652196 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0701 12:22:58.444321  652196 out.go:177] 
	W0701 12:22:58.445678  652196 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 01 12:21:54 ha-735960-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 01 12:21:54 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:54.524329635Z" level=info msg="Starting up"
	Jul 01 12:21:54 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:54.525054987Z" level=info msg="containerd not running, starting managed containerd"
	Jul 01 12:21:54 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:54.525787354Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=513
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.553695593Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572290393Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572432449Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572518940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572558429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572981597Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573093539Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573355911Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573425452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573469593Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573505057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573782642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.574848351Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.576951334Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577031827Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577253828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577304329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577551634Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577624370Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577665230Z" level=info msg="metadata content store policy set" policy=shared
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.580979416Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581128476Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581284824Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581371031Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581432559Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581524784Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581996275Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582118070Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582162131Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582245548Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582319648Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582368655Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582407448Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582445279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582484550Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582521928Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582558472Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582601035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582656126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582693985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582741537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582779033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582815513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582853076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582892671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582938669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582980248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583032987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583083364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583122445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583161506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583262727Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583333396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583373579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583414811Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583520612Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583751718Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583800626Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583838317Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583874340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583912430Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583991424Z" level=info msg="NRI interface is disabled by configuration."
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584364167Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584467963Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584654486Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584785754Z" level=info msg="containerd successfully booted in 0.032655s"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.555699119Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.620790434Z" level=info msg="Loading containers: start."
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.813021303Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.888534738Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.940299653Z" level=info msg="Loading containers: done."
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.956534314Z" level=info msg="Docker daemon" commit=ff1e2c0 containerd-snapshotter=false storage-driver=overlay2 version=27.0.1
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.956851438Z" level=info msg="Daemon has completed initialization"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.988054435Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.988129188Z" level=info msg="API listen on [::]:2376"
	Jul 01 12:21:55 ha-735960-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.316115209Z" level=info msg="Processing signal 'terminated'"
	Jul 01 12:21:57 ha-735960-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317321834Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317386191Z" level=info msg="Daemon shutdown complete"
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317447382Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317464543Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 01 12:21:58 ha-735960-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 01 12:21:58 ha-735960-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 01 12:21:58 ha-735960-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 01 12:21:58 ha-735960-m02 dockerd[1188]: time="2024-07-01T12:21:58.364754006Z" level=info msg="Starting up"
	Jul 01 12:22:58 ha-735960-m02 dockerd[1188]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 01 12:22:58 ha-735960-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 01 12:22:58 ha-735960-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 01 12:22:58 ha-735960-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
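	One plausible reading of the journal above: the first dockerd (pid 506) started its
	own managed containerd and came up cleanly, but the restarted dockerd (pid 1188)
	waited on the system containerd socket at /run/containerd/containerd.sock, which
	had been stopped earlier in provisioning, and timed out after sixty seconds.
	Typical first checks on the node (standard systemd tooling; the socket path comes
	from the error above):
	
		sudo systemctl status containerd --no-pager
		sudo journalctl -u containerd --no-pager | tail -n 50
		# The socket dockerd failed to dial:
		ls -l /run/containerd/containerd.sock
	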
	W0701 12:22:58.445741  652196 out.go:239] * 
	W0701 12:22:58.447325  652196 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:22:58.449434  652196 out.go:177] 
	
	
	==> Docker <==
	Jul 01 12:21:44 ha-735960 dockerd[1190]: time="2024-07-01T12:21:44.208507474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:22:05 ha-735960 dockerd[1184]: time="2024-07-01T12:22:05.425890009Z" level=info msg="ignoring event" container=d97b6df80577316a9cf70b2af0f8d52bb2bd7071ff932a8f1f03df9497724786 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 01 12:22:05 ha-735960 dockerd[1190]: time="2024-07-01T12:22:05.426406022Z" level=info msg="shim disconnected" id=d97b6df80577316a9cf70b2af0f8d52bb2bd7071ff932a8f1f03df9497724786 namespace=moby
	Jul 01 12:22:05 ha-735960 dockerd[1190]: time="2024-07-01T12:22:05.427162251Z" level=warning msg="cleaning up after shim disconnected" id=d97b6df80577316a9cf70b2af0f8d52bb2bd7071ff932a8f1f03df9497724786 namespace=moby
	Jul 01 12:22:05 ha-735960 dockerd[1190]: time="2024-07-01T12:22:05.427275716Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 01 12:22:06 ha-735960 dockerd[1190]: time="2024-07-01T12:22:06.439101176Z" level=info msg="shim disconnected" id=ad4259a9c8ee03ff4c6910c68c5c866481fede150d57267cdc957e46aca4ef80 namespace=moby
	Jul 01 12:22:06 ha-735960 dockerd[1184]: time="2024-07-01T12:22:06.441768147Z" level=info msg="ignoring event" container=ad4259a9c8ee03ff4c6910c68c5c866481fede150d57267cdc957e46aca4ef80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 01 12:22:06 ha-735960 dockerd[1190]: time="2024-07-01T12:22:06.442054407Z" level=warning msg="cleaning up after shim disconnected" id=ad4259a9c8ee03ff4c6910c68c5c866481fede150d57267cdc957e46aca4ef80 namespace=moby
	Jul 01 12:22:06 ha-735960 dockerd[1190]: time="2024-07-01T12:22:06.442214156Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 01 12:22:24 ha-735960 dockerd[1190]: time="2024-07-01T12:22:24.071877635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:22:24 ha-735960 dockerd[1190]: time="2024-07-01T12:22:24.072398316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:22:24 ha-735960 dockerd[1190]: time="2024-07-01T12:22:24.072506177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:22:24 ha-735960 dockerd[1190]: time="2024-07-01T12:22:24.072761669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:22:27 ha-735960 dockerd[1190]: time="2024-07-01T12:22:27.091757274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:22:27 ha-735960 dockerd[1190]: time="2024-07-01T12:22:27.091819785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:22:27 ha-735960 dockerd[1190]: time="2024-07-01T12:22:27.091834055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:22:27 ha-735960 dockerd[1190]: time="2024-07-01T12:22:27.092367194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:22:47 ha-735960 dockerd[1184]: time="2024-07-01T12:22:47.577930706Z" level=info msg="ignoring event" container=e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 01 12:22:47 ha-735960 dockerd[1190]: time="2024-07-01T12:22:47.578670317Z" level=info msg="shim disconnected" id=e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30 namespace=moby
	Jul 01 12:22:47 ha-735960 dockerd[1190]: time="2024-07-01T12:22:47.578983718Z" level=warning msg="cleaning up after shim disconnected" id=e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30 namespace=moby
	Jul 01 12:22:47 ha-735960 dockerd[1190]: time="2024-07-01T12:22:47.579585559Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 01 12:22:48 ha-735960 dockerd[1184]: time="2024-07-01T12:22:48.582829662Z" level=info msg="ignoring event" container=829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 01 12:22:48 ha-735960 dockerd[1190]: time="2024-07-01T12:22:48.583282892Z" level=info msg="shim disconnected" id=829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d namespace=moby
	Jul 01 12:22:48 ha-735960 dockerd[1190]: time="2024-07-01T12:22:48.584157023Z" level=warning msg="cleaning up after shim disconnected" id=829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d namespace=moby
	Jul 01 12:22:48 ha-735960 dockerd[1190]: time="2024-07-01T12:22:48.584285564Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e546c39248bc8       56ce0fd9fb532                                                                                         35 seconds ago       Exited              kube-apiserver            2                   16dae930b4edb       kube-apiserver-ha-735960
	829fe19c75ce3       e874818b3caac                                                                                         38 seconds ago       Exited              kube-controller-manager   2                   5e2a9b91be69c       kube-controller-manager-ha-735960
	cecb3dd12e16e       38af8ddebf499                                                                                         About a minute ago   Running             kube-vip                  0                   8d1562fb4b8c3       kube-vip-ha-735960
	6a200a6b49020       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      1                   5b1097d48d724       etcd-ha-735960
	2d71437c5f06d       7820c83aa1394                                                                                         About a minute ago   Running             kube-scheduler            1                   fa7dea6a1b8bd       kube-scheduler-ha-735960
	14112a4d8f2cb       38af8ddebf499                                                                                         2 minutes ago        Exited              kube-vip                  1                   46ab74fdab7e2       kube-vip-ha-735960
	1ef6d9da6a9c5       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 minutes ago        Exited              busybox                   0                   1f5ccc7b0e655       busybox-fc5497c4f-pjfcw
	a9c30cd4b3455       cbb01a7bd410d                                                                                         6 minutes ago        Exited              coredns                   0                   7b4b4f7ec4b63       coredns-7db6d8ff4d-nk4lf
	769b0b8751350       cbb01a7bd410d                                                                                         6 minutes ago        Exited              coredns                   0                   7a349370d4f88       coredns-7db6d8ff4d-p4rtz
	97d58c94f3fdc       6e38f40d628db                                                                                         6 minutes ago        Exited              storage-provisioner       0                   9226633ad878a       storage-provisioner
	f472aef5302fd       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              6 minutes ago        Exited              kindnet-cni               0                   ab9c74a502295       kindnet-7f6hm
	6116abe6039dc       53c535741fb44                                                                                         6 minutes ago        Exited              kube-proxy                0                   da69191059798       kube-proxy-lphzn
	cb63d54411807       7820c83aa1394                                                                                         7 minutes ago        Exited              kube-scheduler            0                   19b6b0e6ed64e       kube-scheduler-ha-735960
	24c8926d2b31d       3861cfcd7c04c                                                                                         7 minutes ago        Exited              etcd                      0                   d3b914e19ca22       etcd-ha-735960
	
	
	==> coredns [769b0b875135] <==
	[INFO] 10.244.1.2:44221 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000082797s
	[INFO] 10.244.2.2:33797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157729s
	[INFO] 10.244.2.2:52590 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004055351s
	[INFO] 10.244.2.2:46983 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003253494s
	[INFO] 10.244.2.2:56187 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205215s
	[INFO] 10.244.2.2:41086 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158307s
	[INFO] 10.244.0.4:47783 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097077s
	[INFO] 10.244.0.4:50743 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001523s
	[INFO] 10.244.0.4:37141 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138763s
	[INFO] 10.244.1.2:32981 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132906s
	[INFO] 10.244.1.2:36762 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001646552s
	[INFO] 10.244.1.2:33583 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072434s
	[INFO] 10.244.2.2:37027 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156518s
	[INFO] 10.244.2.2:58435 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104504s
	[INFO] 10.244.2.2:36107 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090251s
	[INFO] 10.244.0.4:44792 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000227164s
	[INFO] 10.244.0.4:56557 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000140925s
	[INFO] 10.244.1.2:38284 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000232717s
	[INFO] 10.244.2.2:37664 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135198s
	[INFO] 10.244.2.2:60876 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00032392s
	[INFO] 10.244.1.2:37461 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133264s
	[INFO] 10.244.1.2:45182 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117372s
	[INFO] 10.244.1.2:37156 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000240093s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a9c30cd4b345] <==
	[INFO] 10.244.0.4:57095 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002251804s
	[INFO] 10.244.0.4:42381 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081215s
	[INFO] 10.244.0.4:53499 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00124929s
	[INFO] 10.244.0.4:41287 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174281s
	[INFO] 10.244.0.4:36433 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142863s
	[INFO] 10.244.1.2:47688 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130034s
	[INFO] 10.244.1.2:40562 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00183587s
	[INFO] 10.244.1.2:35137 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000771s
	[INFO] 10.244.1.2:37798 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184282s
	[INFO] 10.244.1.2:43876 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008807s
	[INFO] 10.244.2.2:35039 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119303s
	[INFO] 10.244.0.4:53229 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090292s
	[INFO] 10.244.0.4:42097 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011308s
	[INFO] 10.244.1.2:42114 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130767s
	[INFO] 10.244.1.2:56638 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110707s
	[INFO] 10.244.1.2:55805 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093484s
	[INFO] 10.244.2.2:51675 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000145117s
	[INFO] 10.244.2.2:56838 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136843s
	[INFO] 10.244.0.4:60951 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162889s
	[INFO] 10.244.0.4:34776 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112367s
	[INFO] 10.244.0.4:45397 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000073771s
	[INFO] 10.244.0.4:52372 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000058127s
	[INFO] 10.244.1.2:41033 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131962s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0701 12:23:02.942397    2957 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0701 12:23:02.942978    2957 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0701 12:23:02.944650    2957 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0701 12:23:02.945088    2957 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0701 12:23:02.946699    2957 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
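	
	The describe-nodes failure is consistent with the container status above: the
	kube-apiserver container has exited, so every request to localhost:8443 is refused
	and kubectl cannot return any data. With the apiserver down, the remaining
	evidence comes from the runtime directly, e.g.:
	
		# Inspect the exited apiserver container's logs (ID from the table above):
		sudo docker logs --tail 50 e546c39248bc8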
	
	
	==> dmesg <==
	[Jul 1 12:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050877] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036108] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.421397] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.628587] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.463440] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +4.322115] systemd-fstab-generator[476]: Ignoring "noauto" option for root device
	[  +0.057798] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060958] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
	[  +2.352578] systemd-fstab-generator[1113]: Ignoring "noauto" option for root device
	[  +0.297044] systemd-fstab-generator[1150]: Ignoring "noauto" option for root device
	[  +0.121689] systemd-fstab-generator[1162]: Ignoring "noauto" option for root device
	[  +0.127513] systemd-fstab-generator[1176]: Ignoring "noauto" option for root device
	[  +2.293985] kauditd_printk_skb: 195 callbacks suppressed
	[  +0.325101] systemd-fstab-generator[1411]: Ignoring "noauto" option for root device
	[  +0.108851] systemd-fstab-generator[1423]: Ignoring "noauto" option for root device
	[  +0.138237] systemd-fstab-generator[1435]: Ignoring "noauto" option for root device
	[  +0.156114] systemd-fstab-generator[1450]: Ignoring "noauto" option for root device
	[  +0.494872] systemd-fstab-generator[1603]: Ignoring "noauto" option for root device
	[  +6.977462] kauditd_printk_skb: 176 callbacks suppressed
	[ +11.291301] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [24c8926d2b31] <==
	{"level":"info","ts":"2024-07-01T12:21:01.297933Z","caller":"traceutil/trace.go:171","msg":"trace[249123960] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; }","duration":"4.106112275s","start":"2024-07-01T12:20:57.191803Z","end":"2024-07-01T12:21:01.297915Z","steps":["trace[249123960] 'agreement among raft nodes before linearized reading'  (duration: 4.10601913s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-01T12:21:01.298006Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-01T12:20:57.191796Z","time spent":"4.106166982s","remote":"127.0.0.1:56240","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":0,"response size":0,"request content":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" count_only:true "}
	2024/07/01 12:21:01 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/01 12:21:01 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/01 12:21:01 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-01T12:21:01.381902Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.16:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-01T12:21:01.38194Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.16:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-01T12:21:01.38203Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b6c76b3131c1024","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-01T12:21:01.382382Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c77bbbee62c21090"}
	{"level":"info","ts":"2024-07-01T12:21:01.382398Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c77bbbee62c21090"}
	{"level":"info","ts":"2024-07-01T12:21:01.38247Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c77bbbee62c21090"}
	{"level":"info","ts":"2024-07-01T12:21:01.382583Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b6c76b3131c1024","remote-peer-id":"c77bbbee62c21090"}
	{"level":"info","ts":"2024-07-01T12:21:01.382685Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b6c76b3131c1024","remote-peer-id":"c77bbbee62c21090"}
	{"level":"info","ts":"2024-07-01T12:21:01.382809Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b6c76b3131c1024","remote-peer-id":"c77bbbee62c21090"}
	{"level":"info","ts":"2024-07-01T12:21:01.382826Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c77bbbee62c21090"}
	{"level":"info","ts":"2024-07-01T12:21:01.382832Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:21:01.382882Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:21:01.3829Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:21:01.385706Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b6c76b3131c1024","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:21:01.385804Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b6c76b3131c1024","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:21:01.385838Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b6c76b3131c1024","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:21:01.385849Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:21:01.406065Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.16:2380"}
	{"level":"info","ts":"2024-07-01T12:21:01.406193Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.16:2380"}
	{"level":"info","ts":"2024-07-01T12:21:01.406214Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-735960","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.16:2380"],"advertise-client-urls":["https://192.168.39.16:2379"]}
	
	
	==> etcd [6a200a6b4902] <==
	{"level":"info","ts":"2024-07-01T12:22:57.488845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:57.488894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:57.488907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:57.488922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:57.488929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:58.888088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:58.888193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:58.888234Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:58.888281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:22:58.888295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"warn","ts":"2024-07-01T12:22:59.811118Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c77bbbee62c21090","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:22:59.811186Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c77bbbee62c21090","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:22:59.827782Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-01T12:22:59.82782Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: no route to host"}
	{"level":"info","ts":"2024-07-01T12:23:00.288491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:00.288559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:00.288572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:00.288586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:00.288593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:01.688306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:01.688588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:01.68866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:01.688697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:01.688777Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"warn","ts":"2024-07-01T12:23:01.767286Z","caller":"etcdserver/server.go:2089","msg":"failed to publish local member to cluster through raft","local-member-id":"b6c76b3131c1024","local-member-attributes":"{Name:ha-735960 ClientURLs:[https://192.168.39.16:2379]}","request-path":"/0/members/b6c76b3131c1024/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	
	
	==> kernel <==
	 12:23:03 up 1 min,  0 users,  load average: 0.13, 0.07, 0.02
	Linux ha-735960 5.10.207 #1 SMP Wed Jun 26 19:37:34 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f472aef5302f] <==
	I0701 12:20:12.428842       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:20:22.443154       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:20:22.443292       1 main.go:227] handling current node
	I0701 12:20:22.443323       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:20:22.443388       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:20:22.443605       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:20:22.443653       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:20:22.443793       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:20:22.443836       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:20:32.451395       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:20:32.451431       1 main.go:227] handling current node
	I0701 12:20:32.451481       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:20:32.451486       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:20:32.451947       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:20:32.451980       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:20:32.452873       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:20:32.453015       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:20:42.470169       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:20:42.470264       1 main.go:227] handling current node
	I0701 12:20:42.470289       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:20:42.470302       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:20:42.470523       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:20:42.470616       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:20:42.470868       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:20:42.470914       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [e546c39248bc] <==
	I0701 12:22:27.228496       1 options.go:221] external host was not specified, using 192.168.39.16
	I0701 12:22:27.229584       1 server.go:148] Version: v1.30.2
	I0701 12:22:27.229706       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:22:27.544729       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0701 12:22:27.547846       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0701 12:22:27.551600       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0701 12:22:27.551634       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0701 12:22:27.551982       1 instance.go:299] Using reconciler: lease
	W0701 12:22:47.544372       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0701 12:22:47.544664       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0701 12:22:47.553171       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [829fe19c75ce] <==
	I0701 12:22:24.521097       1 serving.go:380] Generated self-signed cert in-memory
	I0701 12:22:24.837441       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0701 12:22:24.837478       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:22:24.839276       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0701 12:22:24.839470       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0701 12:22:24.839988       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0701 12:22:24.840049       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0701 12:22:48.561111       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.16:8443/healthz\": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:57228->192.168.39.16:8443: read: connection reset by peer"
	
	
	==> kube-proxy [6116abe6039d] <==
	I0701 12:16:09.205590       1 server_linux.go:69] "Using iptables proxy"
	I0701 12:16:09.223098       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.16"]
	I0701 12:16:09.284088       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0701 12:16:09.284134       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0701 12:16:09.284152       1 server_linux.go:165] "Using iptables Proxier"
	I0701 12:16:09.286802       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0701 12:16:09.287240       1 server.go:872] "Version info" version="v1.30.2"
	I0701 12:16:09.287274       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:16:09.288803       1 config.go:192] "Starting service config controller"
	I0701 12:16:09.288830       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0701 12:16:09.289262       1 config.go:101] "Starting endpoint slice config controller"
	I0701 12:16:09.289283       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0701 12:16:09.290101       1 config.go:319] "Starting node config controller"
	I0701 12:16:09.290125       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0701 12:16:09.389941       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0701 12:16:09.390030       1 shared_informer.go:320] Caches are synced for service config
	I0701 12:16:09.390393       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2d71437c5f06] <==
	E0701 12:22:48.563438       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.16:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:57242->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.563506       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.16:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59182->192.168.39.16:8443: read: connection reset by peer
	E0701 12:22:48.563570       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.16:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59182->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.563641       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59186->192.168.39.16:8443: read: connection reset by peer
	E0701 12:22:48.563665       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59186->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.563724       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.16:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59188->192.168.39.16:8443: read: connection reset by peer
	E0701 12:22:48.563747       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.16:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59188->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.563814       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.16:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59202->192.168.39.16:8443: read: connection reset by peer
	E0701 12:22:48.563830       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.16:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59202->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.563886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59238->192.168.39.16:8443: read: connection reset by peer
	E0701 12:22:48.563907       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59238->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.563967       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59262->192.168.39.16:8443: read: connection reset by peer
	E0701 12:22:48.563982       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59262->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.563997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59210->192.168.39.16:8443: read: connection reset by peer
	E0701 12:22:48.564229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59210->192.168.39.16:8443: read: connection reset by peer
	W0701 12:22:48.669137       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.16:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:22:48.669192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.16:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:22:51.792652       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.16:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:22:51.792757       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.16:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:22:52.248014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.16:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:22:52.248063       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.16:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:22:55.201032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.16:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:22:55.201141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.16:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:23:02.474045       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.16:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:23:02.474098       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.16:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	
	
	==> kube-scheduler [cb63d5441180] <==
	W0701 12:15:50.916180       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0701 12:15:50.916379       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0701 12:15:51.752711       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 12:15:51.752853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0701 12:15:51.794007       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0701 12:15:51.794055       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0701 12:15:51.931391       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0701 12:15:51.931434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0701 12:15:51.950120       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0701 12:15:51.950162       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0701 12:15:51.968922       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0701 12:15:51.969125       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0701 12:15:51.985991       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0701 12:15:51.986032       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0701 12:15:52.054298       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0701 12:15:52.054329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0701 12:15:52.260873       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0701 12:15:52.260979       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0701 12:15:54.206866       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0701 12:19:09.710917       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xv95g\": pod kube-proxy-xv95g is already assigned to node \"ha-735960-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xv95g" node="ha-735960-m04"
	E0701 12:19:09.713930       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xv95g\": pod kube-proxy-xv95g is already assigned to node \"ha-735960-m04\"" pod="kube-system/kube-proxy-xv95g"
	I0701 12:21:01.200143       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0701 12:21:01.200254       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0701 12:21:01.200659       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0701 12:21:01.212693       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 01 12:22:49 ha-735960 kubelet[1610]: E0701 12:22:49.182112    1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-735960_kube-system(9a545edc3c0d885e2370d3a24ff8ac4b)\"" pod="kube-system/kube-controller-manager-ha-735960" podUID="9a545edc3c0d885e2370d3a24ff8ac4b"
	Jul 01 12:22:50 ha-735960 kubelet[1610]: I0701 12:22:50.089167    1610 scope.go:117] "RemoveContainer" containerID="e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30"
	Jul 01 12:22:50 ha-735960 kubelet[1610]: E0701 12:22:50.089722    1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-735960_kube-system(858bfcad8b1d02b8cdc3dc83c4af060c)\"" pod="kube-system/kube-apiserver-ha-735960" podUID="858bfcad8b1d02b8cdc3dc83c4af060c"
	Jul 01 12:22:50 ha-735960 kubelet[1610]: I0701 12:22:50.202365    1610 scope.go:117] "RemoveContainer" containerID="829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d"
	Jul 01 12:22:50 ha-735960 kubelet[1610]: E0701 12:22:50.202700    1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-735960_kube-system(9a545edc3c0d885e2370d3a24ff8ac4b)\"" pod="kube-system/kube-controller-manager-ha-735960" podUID="9a545edc3c0d885e2370d3a24ff8ac4b"
	Jul 01 12:22:51 ha-735960 kubelet[1610]: I0701 12:22:51.209935    1610 scope.go:117] "RemoveContainer" containerID="829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d"
	Jul 01 12:22:51 ha-735960 kubelet[1610]: E0701 12:22:51.210647    1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-735960_kube-system(9a545edc3c0d885e2370d3a24ff8ac4b)\"" pod="kube-system/kube-controller-manager-ha-735960" podUID="9a545edc3c0d885e2370d3a24ff8ac4b"
	Jul 01 12:22:51 ha-735960 kubelet[1610]: I0701 12:22:51.576067    1610 kubelet_node_status.go:73] "Attempting to register node" node="ha-735960"
	Jul 01 12:22:53 ha-735960 kubelet[1610]: I0701 12:22:53.728933    1610 scope.go:117] "RemoveContainer" containerID="e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30"
	Jul 01 12:22:53 ha-735960 kubelet[1610]: E0701 12:22:53.729329    1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-735960_kube-system(858bfcad8b1d02b8cdc3dc83c4af060c)\"" pod="kube-system/kube-apiserver-ha-735960" podUID="858bfcad8b1d02b8cdc3dc83c4af060c"
	Jul 01 12:22:53 ha-735960 kubelet[1610]: E0701 12:22:53.789831    1610 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.254:8443: connect: no route to host" node="ha-735960"
	Jul 01 12:22:53 ha-735960 kubelet[1610]: E0701 12:22:53.790000    1610 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-735960?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jul 01 12:22:56 ha-735960 kubelet[1610]: W0701 12:22:56.862031    1610 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 01 12:22:56 ha-735960 kubelet[1610]: E0701 12:22:56.862122    1610 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 01 12:22:57 ha-735960 kubelet[1610]: E0701 12:22:57.094040    1610 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-735960\" not found"
	Jul 01 12:22:59 ha-735960 kubelet[1610]: W0701 12:22:59.934973    1610 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 01 12:22:59 ha-735960 kubelet[1610]: E0701 12:22:59.935046    1610 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 01 12:22:59 ha-735960 kubelet[1610]: W0701 12:22:59.935096    1610 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-735960&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 01 12:22:59 ha-735960 kubelet[1610]: E0701 12:22:59.935120    1610 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-735960&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 01 12:22:59 ha-735960 kubelet[1610]: E0701 12:22:59.935170    1610 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-735960.17de162e90ad8f5f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-735960,UID:ha-735960,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-735960,},FirstTimestamp:2024-07-01 12:21:36.953708383 +0000 UTC m=+0.183371310,LastTimestamp:2024-07-01 12:21:36.953708383 +0000 UTC m=+0.183371310,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-735960,}"
	Jul 01 12:23:00 ha-735960 kubelet[1610]: I0701 12:23:00.791239    1610 kubelet_node_status.go:73] "Attempting to register node" node="ha-735960"
	Jul 01 12:23:03 ha-735960 kubelet[1610]: W0701 12:23:03.006011    1610 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 01 12:23:03 ha-735960 kubelet[1610]: E0701 12:23:03.006051    1610 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.254:8443: connect: no route to host" node="ha-735960"
	Jul 01 12:23:03 ha-735960 kubelet[1610]: E0701 12:23:03.006090    1610 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 01 12:23:03 ha-735960 kubelet[1610]: E0701 12:23:03.006162    1610 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-735960?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	

                                                
                                                
-- /stdout --
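Read bottom-up, the log sections above form a single cascade: etcd [6a200a6b4902] never gets past pre-vote at term 2 because both peers are unreachable (c77bbbee62c21090 at 192.168.39.86:2380 refuses connections, 77557cf66c24e9ff at 192.168.39.97:2380 has no route to host); without quorum the apiserver's storage init times out ("Error creating leases"), the controller-manager's apiserver /healthz wait fails, the scheduler's watches get connection-refused, and the kubelet cannot register against the VIP 192.168.39.254:8443. A three-member raft group needs floor(3/2)+1 = 2 live voters, and only ha-735960 is up. The sketch below is a hypothetical triage helper, not part of minikube or this test suite; the endpoints come from the peer addresses in the log, while the cert paths are assumptions based on minikube's kubeadm-style layout under /var/lib/minikube/certs.

// etcdprobe.go — hypothetical triage sketch; endpoint list taken from the
// log above, cert paths assumed from minikube's kubeadm-style layout.
package main

import (
	"context"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Assumed paths on the ha-735960 VM; adjust for your cluster.
	cert, err := tls.LoadX509KeyPair(
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.key",
	)
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("/var/lib/minikube/certs/etcd/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	endpoints := []string{
		"https://192.168.39.16:2379", // ha-735960: up, stuck as pre-candidate
		"https://192.168.39.86:2379", // ha-735960-m02: connection refused in the log
		"https://192.168.39.97:2379", // ha-735960-m03: no route to host in the log
	}
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   endpoints,
		DialTimeout: 3 * time.Second,
		TLS:         &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	healthy := 0
	for _, ep := range endpoints {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		st, err := cli.Status(ctx, ep)
		cancel()
		if err != nil {
			fmt.Printf("%s: %v\n", ep, err)
			continue
		}
		healthy++
		fmt.Printf("%s: term=%d leader=%x\n", ep, st.RaftTerm, st.Leader)
	}
	// Raft quorum is floor(n/2)+1: a 3-member cluster needs 2 live voters,
	// so with only ha-735960 reachable the election can never complete.
	if healthy < 2 {
		fmt.Println("quorum lost: pre-vote will stall exactly as logged")
	}
}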
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-735960 -n ha-735960
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-735960 -n ha-735960: exit status 2 (230.835771ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-735960" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.68s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (58.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 stop -v=7 --alsologtostderr
E0701 12:23:22.863651  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-735960 stop -v=7 --alsologtostderr: (58.805260324s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr: exit status 7 (123.611874ms)

                                                
                                                
-- stdout --
	ha-735960
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-735960-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-735960-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-735960-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 12:24:02.377383  653477 out.go:291] Setting OutFile to fd 1 ...
	I0701 12:24:02.377503  653477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:24:02.377511  653477 out.go:304] Setting ErrFile to fd 2...
	I0701 12:24:02.377515  653477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:24:02.377696  653477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
	I0701 12:24:02.377860  653477 out.go:298] Setting JSON to false
	I0701 12:24:02.377888  653477 mustload.go:65] Loading cluster: ha-735960
	I0701 12:24:02.378005  653477 notify.go:220] Checking for updates...
	I0701 12:24:02.378265  653477 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:02.378282  653477 status.go:255] checking status of ha-735960 ...
	I0701 12:24:02.378722  653477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:02.378779  653477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:02.400342  653477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43529
	I0701 12:24:02.400850  653477 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:02.401487  653477 main.go:141] libmachine: Using API Version  1
	I0701 12:24:02.401520  653477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:02.401880  653477 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:02.402068  653477 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:24:02.403726  653477 status.go:330] ha-735960 host status = "Stopped" (err=<nil>)
	I0701 12:24:02.403738  653477 status.go:343] host is not running, skipping remaining checks
	I0701 12:24:02.403745  653477 status.go:257] ha-735960 status: &{Name:ha-735960 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0701 12:24:02.403769  653477 status.go:255] checking status of ha-735960-m02 ...
	I0701 12:24:02.404052  653477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:02.404105  653477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:02.419410  653477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35849
	I0701 12:24:02.419794  653477 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:02.420207  653477 main.go:141] libmachine: Using API Version  1
	I0701 12:24:02.420229  653477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:02.420601  653477 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:02.420790  653477 main.go:141] libmachine: (ha-735960-m02) Calling .GetState
	I0701 12:24:02.422249  653477 status.go:330] ha-735960-m02 host status = "Stopped" (err=<nil>)
	I0701 12:24:02.422267  653477 status.go:343] host is not running, skipping remaining checks
	I0701 12:24:02.422275  653477 status.go:257] ha-735960-m02 status: &{Name:ha-735960-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0701 12:24:02.422295  653477 status.go:255] checking status of ha-735960-m03 ...
	I0701 12:24:02.422621  653477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:02.422656  653477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:02.437366  653477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42733
	I0701 12:24:02.437735  653477 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:02.438236  653477 main.go:141] libmachine: Using API Version  1
	I0701 12:24:02.438255  653477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:02.438601  653477 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:02.438796  653477 main.go:141] libmachine: (ha-735960-m03) Calling .GetState
	I0701 12:24:02.440138  653477 status.go:330] ha-735960-m03 host status = "Stopped" (err=<nil>)
	I0701 12:24:02.440163  653477 status.go:343] host is not running, skipping remaining checks
	I0701 12:24:02.440181  653477 status.go:257] ha-735960-m03 status: &{Name:ha-735960-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0701 12:24:02.440205  653477 status.go:255] checking status of ha-735960-m04 ...
	I0701 12:24:02.440484  653477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:02.440534  653477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:02.455215  653477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34791
	I0701 12:24:02.455658  653477 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:02.456174  653477 main.go:141] libmachine: Using API Version  1
	I0701 12:24:02.456201  653477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:02.456521  653477 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:02.456702  653477 main.go:141] libmachine: (ha-735960-m04) Calling .GetState
	I0701 12:24:02.458485  653477 status.go:330] ha-735960-m04 host status = "Stopped" (err=<nil>)
	I0701 12:24:02.458502  653477 status.go:343] host is not running, skipping remaining checks
	I0701 12:24:02.458509  653477 status.go:257] ha-735960-m04 status: &{Name:ha-735960-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr": ha-735960
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-735960-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-735960-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-735960-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr": ha-735960
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-735960-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-735960-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-735960-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr": ha-735960
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-735960-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-735960-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-735960-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-735960 -n ha-735960
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-735960 -n ha-735960: exit status 7 (63.304055ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-735960" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (58.99s)
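The three assertion failures above (ha_test.go:543, :549, :552) are count checks over the plain-text status output: the run still has three control-plane nodes plus one worker, which is consistent with ha-735960-m03 never having been removed by the earlier delete step, so the expected counts of two control planes and two apiservers cannot match. A toy tally like the sketch below (a hypothetical helper, not the test's real parser) reproduces the counts the test saw when fed the status text above:

// statustally.go — hypothetical helper, not part of ha_test.go: pipe the
// status text above through it to reproduce the counts the test checks.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		// Status lines look like "type: Control Plane" or "apiserver: Stopped".
		line := strings.TrimSpace(sc.Text())
		if key, val, ok := strings.Cut(line, ": "); ok {
			counts[key+"="+val]++
		}
	}
	// For the output above: type=Control Plane:3, type=Worker:1,
	// host=Stopped:4, kubelet=Stopped:4, apiserver=Stopped:3 — the test
	// wanted two control planes and two apiservers, hence the failures.
	for k, v := range counts {
		fmt.Printf("%-22s %d\n", k, v)
	}
}

Fed with, for example, `out/minikube-linux-amd64 -p ha-735960 status | go run statustally.go` (invocation shown for illustration only), the tally makes the off-by-one-node mismatch immediately visible.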

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (217.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-735960 --wait=true -v=7 --alsologtostderr --driver=kvm2 
E0701 12:24:11.881073  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
E0701 12:24:39.565295  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-735960 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (3m33.688504069s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr
ha_test.go:571: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr": ha-735960
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-735960-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-735960-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-735960-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:574: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr": ha-735960
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-735960-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-735960-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-735960-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:577: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr": ha-735960
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-735960-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-735960-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-735960-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:580: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr": ha-735960
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-735960-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-735960-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-735960-m04
type: Worker
host: Running
kubelet: Running

ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' True
	 True
	 True
	 True
	'

-- /stdout --
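The assertion at ha_test.go:597 counts the True lines produced by the go-template above, one per node whose Ready condition is True, and compares the count against the expected cluster size. Four True lines came back instead of three, consistent with the Audit table below, where the earlier `ha-735960 node delete m03` invocation has no recorded End Time, so the restart brought all four original nodes back. A minimal sketch of the same readiness count; the helper name and error handling are illustrative, not minikube's test code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// readyCount runs the same go-template the test uses and counts nodes whose
// Ready condition is True. Unlike the test invocation, the template here is
// not wrapped in literal single quotes, so the output is one status per line.
func readyCount() (int, error) {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		return 0, err
	}
	n := 0
	for _, line := range strings.Split(string(out), "\n") {
		if strings.TrimSpace(line) == "True" {
			n++
		}
	}
	return n, nil
}

func main() {
	n, err := readyCount()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	// The failing run above counted 4 Ready nodes where 3 were expected.
	fmt.Printf("%d nodes Ready\n", n)
}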
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-735960 -n ha-735960
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-735960 logs -n 25: (1.650438403s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-735960 cp ha-735960-m03:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04:/home/docker/cp-test_ha-735960-m03_ha-735960-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m04 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m03_ha-735960-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-735960 cp testdata/cp-test.txt                                                | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2826819896/001/cp-test_ha-735960-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960:/home/docker/cp-test_ha-735960-m04_ha-735960.txt                       |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960 sudo cat                                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m04_ha-735960.txt                                 |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m02:/home/docker/cp-test_ha-735960-m04_ha-735960-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m02 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m04_ha-735960-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m03:/home/docker/cp-test_ha-735960-m04_ha-735960-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m03 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m04_ha-735960-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-735960 node stop m02 -v=7                                                     | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-735960 node start m02 -v=7                                                    | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:20 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-735960 -v=7                                                           | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-735960 -v=7                                                                | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:20 UTC | 01 Jul 24 12:21 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-735960 --wait=true -v=7                                                    | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-735960                                                                | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:22 UTC |                     |
	| node    | ha-735960 node delete m03 -v=7                                                   | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-735960 stop -v=7                                                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:23 UTC | 01 Jul 24 12:24 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-735960 --wait=true                                                         | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:24 UTC | 01 Jul 24 12:27 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/01 12:24:02
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 12:24:02.565321  653531 out.go:291] Setting OutFile to fd 1 ...
	I0701 12:24:02.565576  653531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:24:02.565584  653531 out.go:304] Setting ErrFile to fd 2...
	I0701 12:24:02.565588  653531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:24:02.565782  653531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
	I0701 12:24:02.566304  653531 out.go:298] Setting JSON to false
	I0701 12:24:02.567248  653531 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7581,"bootTime":1719829062,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 12:24:02.567318  653531 start.go:139] virtualization: kvm guest
	I0701 12:24:02.569903  653531 out.go:177] * [ha-735960] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0701 12:24:02.571307  653531 notify.go:220] Checking for updates...
	I0701 12:24:02.571336  653531 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 12:24:02.572748  653531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:24:02.574111  653531 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:24:02.575333  653531 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-630650/.minikube
	I0701 12:24:02.576670  653531 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 12:24:02.578040  653531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:24:02.579691  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:02.580063  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:02.580118  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:02.595084  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46077
	I0701 12:24:02.595523  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:02.596065  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:02.596090  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:02.596376  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:02.596591  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:02.596798  653531 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 12:24:02.597091  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:02.597140  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:02.611685  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43891
	I0701 12:24:02.612062  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:02.612574  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:02.612596  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:02.612886  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:02.613060  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:02.647232  653531 out.go:177] * Using the kvm2 driver based on existing profile
	I0701 12:24:02.648606  653531 start.go:297] selected driver: kvm2
	I0701 12:24:02.648624  653531 start.go:901] validating driver "kvm2" against &{Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:24:02.648774  653531 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:24:02.649109  653531 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:24:02.649176  653531 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19166-630650/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0701 12:24:02.663726  653531 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0701 12:24:02.664362  653531 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:24:02.664394  653531 cni.go:84] Creating CNI manager for ""
	I0701 12:24:02.664400  653531 cni.go:136] multinode detected (4 nodes found), recommending kindnet
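The cni.go:136 line records minikube's multinode heuristic: with more than one node found and no CNI set explicitly, kindnet is recommended. A rough sketch of that decision, with hypothetical names and an assumed single-node default, not minikube's actual code:

package main

import "fmt"

// chooseCNI sketches the decision logged at cni.go:136: an explicit choice
// wins; otherwise a multinode cluster gets kindnet recommended.
func chooseCNI(configured string, nodeCount int) string {
	if configured != "" {
		return configured
	}
	if nodeCount > 1 {
		return "kindnet"
	}
	return "bridge" // assumption: some single-node default applies here
}

func main() {
	fmt.Println(chooseCNI("", 4)) // 4 nodes found above -> "kindnet"
}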
	I0701 12:24:02.664456  653531 start.go:340] cluster config:
	{Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:24:02.664569  653531 iso.go:125] acquiring lock: {Name:mk5c70910f61bc270c83609c48670eaf9d7e0602 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:24:02.666644  653531 out.go:177] * Starting "ha-735960" primary control-plane node in "ha-735960" cluster
	I0701 12:24:02.667913  653531 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:24:02.667956  653531 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0701 12:24:02.667963  653531 cache.go:56] Caching tarball of preloaded images
	I0701 12:24:02.668051  653531 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:24:02.668065  653531 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:24:02.668178  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:24:02.668362  653531 start.go:360] acquireMachinesLock for ha-735960: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:24:02.668420  653531 start.go:364] duration metric: took 37.459µs to acquireMachinesLock for "ha-735960"
	I0701 12:24:02.668440  653531 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:24:02.668454  653531 fix.go:54] fixHost starting: 
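The start.go:360/364 lines show a named lock being taken around machine fixup, with a 500ms retry delay and a 13m timeout in the lock spec, and a duration metric recording how long acquisition took (37µs here, since nothing held the lock). A generic sketch of that acquire-with-timeout pattern, under assumed names:

package main

import (
	"fmt"
	"time"
)

// acquireLock retries a non-blocking acquire every delay until timeout,
// mirroring the {Delay:500ms Timeout:13m0s} spec in the start.go:360 line.
// Names are illustrative, not minikube's implementation.
func acquireLock(try func() bool, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if try() {
			return nil
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("timed out after %s", timeout)
}

func main() {
	held := false // nobody holds the lock, so the first try succeeds
	err := acquireLock(func() bool { return !held }, 500*time.Millisecond, 13*time.Minute)
	fmt.Println("acquired:", err == nil)
}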
	I0701 12:24:02.668711  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:02.668747  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:02.682861  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39713
	I0701 12:24:02.683321  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:02.683791  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:02.683812  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:02.684145  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:02.684389  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:02.684573  653531 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:24:02.686019  653531 fix.go:112] recreateIfNeeded on ha-735960: state=Stopped err=<nil>
	I0701 12:24:02.686043  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	W0701 12:24:02.686187  653531 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:24:02.688339  653531 out.go:177] * Restarting existing kvm2 VM for "ha-735960" ...
	I0701 12:24:02.690004  653531 main.go:141] libmachine: (ha-735960) Calling .Start
	I0701 12:24:02.690210  653531 main.go:141] libmachine: (ha-735960) Ensuring networks are active...
	I0701 12:24:02.690928  653531 main.go:141] libmachine: (ha-735960) Ensuring network default is active
	I0701 12:24:02.691237  653531 main.go:141] libmachine: (ha-735960) Ensuring network mk-ha-735960 is active
	I0701 12:24:02.691618  653531 main.go:141] libmachine: (ha-735960) Getting domain xml...
	I0701 12:24:02.692321  653531 main.go:141] libmachine: (ha-735960) Creating domain...
	I0701 12:24:03.888996  653531 main.go:141] libmachine: (ha-735960) Waiting to get IP...
	I0701 12:24:03.889967  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:03.890480  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:03.890588  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:03.890454  653582 retry.go:31] will retry after 276.532377ms: waiting for machine to come up
	I0701 12:24:04.169193  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:04.169696  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:04.169722  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:04.169655  653582 retry.go:31] will retry after 379.701447ms: waiting for machine to come up
	I0701 12:24:04.551325  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:04.551741  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:04.551768  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:04.551690  653582 retry.go:31] will retry after 390.796114ms: waiting for machine to come up
	I0701 12:24:04.944503  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:04.944879  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:04.944907  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:04.944824  653582 retry.go:31] will retry after 501.242083ms: waiting for machine to come up
	I0701 12:24:05.447754  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:05.448283  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:05.448315  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:05.448261  653582 retry.go:31] will retry after 739.761709ms: waiting for machine to come up
	I0701 12:24:06.189145  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:06.189602  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:06.189631  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:06.189545  653582 retry.go:31] will retry after 652.97975ms: waiting for machine to come up
	I0701 12:24:06.844427  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:06.844894  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:06.844917  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:06.844845  653582 retry.go:31] will retry after 1.122975762s: waiting for machine to come up
	I0701 12:24:07.969893  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:07.970374  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:07.970427  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:07.970304  653582 retry.go:31] will retry after 933.604302ms: waiting for machine to come up
	I0701 12:24:08.905636  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:08.905959  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:08.905983  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:08.905909  653582 retry.go:31] will retry after 1.753153445s: waiting for machine to come up
	I0701 12:24:10.662098  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:10.662553  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:10.662622  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:10.662537  653582 retry.go:31] will retry after 1.625060377s: waiting for machine to come up
	I0701 12:24:12.290368  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:12.290788  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:12.290822  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:12.290695  653582 retry.go:31] will retry after 2.741972388s: waiting for machine to come up
	I0701 12:24:15.036161  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:15.036634  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:15.036661  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:15.036581  653582 retry.go:31] will retry after 3.113034425s: waiting for machine to come up
	I0701 12:24:18.151534  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.152048  653531 main.go:141] libmachine: (ha-735960) Found IP for machine: 192.168.39.16
	I0701 12:24:18.152074  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has current primary IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.152083  653531 main.go:141] libmachine: (ha-735960) Reserving static IP address...
	I0701 12:24:18.152579  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "ha-735960", mac: "52:54:00:6c:20:7c", ip: "192.168.39.16"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.152611  653531 main.go:141] libmachine: (ha-735960) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960", mac: "52:54:00:6c:20:7c", ip: "192.168.39.16"}
	I0701 12:24:18.152626  653531 main.go:141] libmachine: (ha-735960) Reserved static IP address: 192.168.39.16
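The retry.go:31 lines above show the wait-for-IP loop: each failed DHCP-lease lookup schedules another attempt after a randomized, roughly growing delay (276ms, 379ms, ... 3.1s) until a lease for the VM's MAC address appears. A sketch of that poll-with-jittered-backoff pattern; the function name and backoff constants are assumptions:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup with a jittered, growing delay until it yields an
// address or the deadline passes, like the retry.go:31 lines above.
func waitForIP(lookup func() (string, bool), deadline time.Time) (string, error) {
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookup(); ok {
			return ip, nil
		}
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff)))) // add jitter
		if backoff < 4*time.Second {
			backoff *= 2 // grow the delay, matching the lengthening retries in the log
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, bool) {
		calls++
		return "192.168.39.16", calls > 3 // lease appears on the fourth poll
	}, time.Now().Add(time.Minute))
	fmt.Println(ip, err)
}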
	I0701 12:24:18.152643  653531 main.go:141] libmachine: (ha-735960) Waiting for SSH to be available...
	I0701 12:24:18.152674  653531 main.go:141] libmachine: (ha-735960) DBG | Getting to WaitForSSH function...
	I0701 12:24:18.154511  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.154741  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.154760  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.154885  653531 main.go:141] libmachine: (ha-735960) DBG | Using SSH client type: external
	I0701 12:24:18.154912  653531 main.go:141] libmachine: (ha-735960) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa (-rw-------)
	I0701 12:24:18.154954  653531 main.go:141] libmachine: (ha-735960) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:24:18.154968  653531 main.go:141] libmachine: (ha-735960) DBG | About to run SSH command:
	I0701 12:24:18.154991  653531 main.go:141] libmachine: (ha-735960) DBG | exit 0
	I0701 12:24:18.274220  653531 main.go:141] libmachine: (ha-735960) DBG | SSH cmd err, output: <nil>: 
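WaitForSSH shells out to the system ssh client with the options logged above (host-key checking disabled, 10s connect timeout, the machine's private key) and runs `exit 0`; a zero exit status means sshd in the guest is accepting logins. A minimal sketch of that probe, illustrative only:

package main

import (
	"fmt"
	"os/exec"
)

// sshReady mirrors the probe above: a nil error from Run (exit status 0)
// means the guest's sshd accepted the connection and ran `exit 0`.
func sshReady(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+ip,
		"exit", "0")
	return cmd.Run() == nil
}

func main() {
	// Placeholder values; the run above used 192.168.39.16 and the profile's key.
	fmt.Println(sshReady("192.168.39.16", "/path/to/id_rsa"))
}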
	I0701 12:24:18.274677  653531 main.go:141] libmachine: (ha-735960) Calling .GetConfigRaw
	I0701 12:24:18.275344  653531 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:24:18.277628  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.278085  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.278118  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.278447  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:24:18.278671  653531 machine.go:94] provisionDockerMachine start ...
	I0701 12:24:18.278694  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:18.278956  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:18.281138  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.281565  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.281590  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.281697  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:18.281884  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.282084  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.282290  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:18.282484  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:18.282777  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:18.282790  653531 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:24:18.378249  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:24:18.378279  653531 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:24:18.378583  653531 buildroot.go:166] provisioning hostname "ha-735960"
	I0701 12:24:18.378614  653531 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:24:18.378869  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:18.381421  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.381789  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.381817  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.381949  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:18.382158  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.382297  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.382445  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:18.382576  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:18.382763  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:18.382780  653531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960 && echo "ha-735960" | sudo tee /etc/hostname
	I0701 12:24:18.491369  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960
	
	I0701 12:24:18.491396  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:18.494039  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.494432  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.494460  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.494718  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:18.494939  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.495106  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.495259  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:18.495452  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:18.495675  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:18.495699  653531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:24:18.598595  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
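The /etc/hosts fragment above is a guard: if no line already ends with the hostname, it either rewrites an existing 127.0.1.1 entry in place or appends a new one, so the mapping is created exactly once. The same logic as a pure string transformation, for illustration:

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry is a pure-string version of the shell above: leave the
// file alone if any line already ends with the hostname, otherwise rewrite
// an existing 127.0.1.1 line or append a new mapping.
func ensureHostsEntry(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		if strings.HasSuffix(l, " "+name) || strings.HasSuffix(l, "\t"+name) {
			return hosts // mapping already present
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // replace the old 127.0.1.1 entry
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name // no entry at all: append one
}

func main() {
	fmt.Println(ensureHostsEntry("127.0.0.1 localhost", "ha-735960"))
}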
	I0701 12:24:18.598631  653531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:24:18.598653  653531 buildroot.go:174] setting up certificates
	I0701 12:24:18.598662  653531 provision.go:84] configureAuth start
	I0701 12:24:18.598670  653531 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:24:18.598968  653531 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:24:18.601563  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.602005  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.602036  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.602215  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:18.604739  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.605246  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.605273  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.605427  653531 provision.go:143] copyHostCerts
	I0701 12:24:18.605458  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:24:18.605515  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:24:18.605523  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:24:18.605588  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:24:18.605671  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:24:18.605688  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:24:18.605695  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:24:18.605718  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:24:18.605772  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:24:18.605788  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:24:18.605794  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:24:18.605814  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:24:18.605871  653531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960 san=[127.0.0.1 192.168.39.16 ha-735960 localhost minikube]
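provision.go:117 logs the SAN set for the machine's server certificate: the loopback and machine IPs plus the ha-735960, localhost, and minikube names. A sketch of how such a template could be expressed with Go's crypto/x509; the serial number, validity window, and key usages are illustrative choices, not minikube's:

package main

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// serverCertTemplate maps the logged SAN list into an x509 template:
// both IPs plus the host, localhost, and minikube names.
func serverCertTemplate(ip, name string) *x509.Certificate {
	return &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins." + name}},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP(ip)},
		DNSNames:     []string{name, "localhost", "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(0, 0, 365),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
}

func main() {
	t := serverCertTemplate("192.168.39.16", "ha-735960")
	fmt.Println(t.DNSNames, t.IPAddresses)
}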
	I0701 12:24:19.079576  653531 provision.go:177] copyRemoteCerts
	I0701 12:24:19.079661  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:24:19.079696  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:19.082253  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.082610  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:19.082638  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.082786  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:19.082996  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.083179  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:19.083325  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:19.160543  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:24:19.160634  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:24:19.183871  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:24:19.183957  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0701 12:24:19.206811  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:24:19.206911  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 12:24:19.229160  653531 provision.go:87] duration metric: took 630.48062ms to configureAuth
	I0701 12:24:19.229197  653531 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:24:19.229480  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:19.229521  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:19.229827  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:19.232595  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.233032  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:19.233062  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.233264  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:19.233514  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.233696  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.233834  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:19.234025  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:19.234222  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:19.234237  653531 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:24:19.331417  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:24:19.331446  653531 buildroot.go:70] root file system type: tmpfs
	I0701 12:24:19.331582  653531 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:24:19.331605  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:19.334269  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.334634  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:19.334660  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.334900  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:19.335107  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.335308  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.335479  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:19.335645  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:19.335809  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:19.335865  653531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:24:19.443562  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:24:19.443592  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:19.446176  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.446524  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:19.446556  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.446723  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:19.446930  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.447105  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.447245  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:19.447408  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:19.447591  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:19.447611  653531 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:24:21.232310  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 12:24:21.232343  653531 machine.go:97] duration metric: took 2.953656212s to provisionDockerMachine
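
The sequence above is the provisioner's idempotent unit install: it writes docker.service.new, diffs it against the live unit, and only on a difference swaps the file in and daemon-reloads/enables/restarts Docker. Here the diff fails outright because the freshly restarted VM has no /lib/systemd/system/docker.service yet, so the replace-and-restart branch runs and systemd reports the new enablement symlink. A minimal local sketch of the same compare-then-swap pattern (helper name and paths are illustrative, not minikube's code):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installUnit writes the desired unit content only when it differs from
    // what is already on disk, then reloads systemd and restarts the service.
    func installUnit(path string, desired []byte, service string) error {
        current, err := os.ReadFile(path)
        if err == nil && bytes.Equal(current, desired) {
            return nil // unit already up to date, nothing to restart
        }
        if err := os.WriteFile(path, desired, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"},
            {"enable", service},
            {"restart", service},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
        if err := installUnit("/lib/systemd/system/docker.service", unit, "docker"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
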
	I0701 12:24:21.232359  653531 start.go:293] postStartSetup for "ha-735960" (driver="kvm2")
	I0701 12:24:21.232371  653531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:24:21.232390  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.232744  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:24:21.232777  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:21.235119  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.235559  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.235584  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.235772  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:21.235940  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.236122  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:21.236248  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:21.313134  653531 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:24:21.317084  653531 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:24:21.317118  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:24:21.317202  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:24:21.317295  653531 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:24:21.317307  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:24:21.317399  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:24:21.326681  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:24:21.349306  653531 start.go:296] duration metric: took 116.926386ms for postStartSetup
	I0701 12:24:21.349360  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.349703  653531 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:24:21.349739  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:21.352499  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.352917  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.352946  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.353148  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:21.353394  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.353561  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:21.353790  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:21.433784  653531 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:24:21.433859  653531 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:24:21.475659  653531 fix.go:56] duration metric: took 18.807194904s for fixHost
	I0701 12:24:21.475706  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:21.478623  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.479038  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.479071  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.479250  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:21.479467  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.479584  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.479702  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:21.479838  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:21.480034  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:21.480048  653531 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0701 12:24:21.586741  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836661.563256683
	
	I0701 12:24:21.586770  653531 fix.go:216] guest clock: 1719836661.563256683
	I0701 12:24:21.586783  653531 fix.go:229] Guest: 2024-07-01 12:24:21.563256683 +0000 UTC Remote: 2024-07-01 12:24:21.475685785 +0000 UTC m=+18.945537438 (delta=87.570898ms)
	I0701 12:24:21.586836  653531 fix.go:200] guest clock delta is within tolerance: 87.570898ms
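
fix.go's clock check parses the guest's `date +%s.%N` output and compares it with the host's wall clock, resyncing only when the delta exceeds a tolerance; the 87.57ms measured here passes. A sketch of that comparison, using the timestamp from the log (the 2s tolerance is an assumed illustration, not minikube's exact threshold):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        // Guest output of `date +%s.%N`, as captured in the log above.
        guestRaw := "1719836661.563256683"
        parts := strings.SplitN(guestRaw, ".", 2)
        secs, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            panic(err)
        }
        nsec, err := strconv.ParseInt(parts[1], 10, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(secs, nsec)
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed threshold for illustration
        fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < tolerance)
    }
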
	I0701 12:24:21.586844  653531 start.go:83] releasing machines lock for "ha-735960", held for 18.918411663s
	I0701 12:24:21.586868  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.587158  653531 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:24:21.589666  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.590034  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.590064  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.590216  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.590761  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.590954  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.591048  653531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:24:21.591096  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:21.591207  653531 ssh_runner.go:195] Run: cat /version.json
	I0701 12:24:21.591235  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:21.593711  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.593857  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.594066  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.594091  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.594278  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:21.594408  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.594432  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.594491  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.594596  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:21.594674  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:21.594780  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.594865  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:21.594903  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:21.595018  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:21.688196  653531 ssh_runner.go:195] Run: systemctl --version
	I0701 12:24:21.693743  653531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0701 12:24:21.698823  653531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:24:21.698901  653531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:24:21.714364  653531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:24:21.714404  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:24:21.714572  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:24:21.734692  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:24:21.744599  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:24:21.754591  653531 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:24:21.754664  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:24:21.764718  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:24:21.774564  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:24:21.784516  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:24:21.794592  653531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:24:21.804646  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:24:21.814497  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:24:21.824363  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:24:21.834566  653531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:24:21.843852  653531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:24:21.852939  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:21.959107  653531 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 12:24:21.981473  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:24:21.981556  653531 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:24:21.995383  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:24:22.009843  653531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:24:22.030755  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:24:22.043208  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:24:22.055774  653531 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:24:22.080888  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:24:22.093331  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:24:22.110088  653531 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:24:22.113487  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:24:22.121907  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:24:22.137227  653531 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:24:22.245438  653531 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:24:22.351994  653531 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:24:22.352150  653531 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
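
With cri-dockerd wired up, docker.go switches the daemon itself to the cgroupfs driver by shipping a small /etc/docker/daemon.json (130 bytes here). The log does not print the payload; a plausible equivalent, explicitly a guess at the contents rather than minikube's actual file, would be:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Hypothetical daemon.json matching "configuring docker to use cgroupfs";
        // the real 130-byte payload is not shown in the log.
        cfg := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        out, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }
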
	I0701 12:24:22.368109  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:22.474388  653531 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:24:24.887396  653531 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.412956412s)
	I0701 12:24:24.887487  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 12:24:24.900113  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:24:24.912702  653531 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:24:25.020545  653531 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:24:25.134056  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:25.242294  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:24:25.258251  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:24:25.270762  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:25.375199  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0701 12:24:25.454939  653531 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:24:25.455020  653531 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0701 12:24:25.460209  653531 start.go:562] Will wait 60s for crictl version
	I0701 12:24:25.460266  653531 ssh_runner.go:195] Run: which crictl
	I0701 12:24:25.463721  653531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:24:25.498358  653531 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.1
	RuntimeApiVersion:  v1
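
start.go then grants cri-dockerd up to 60s for its socket to appear and another 60s for `crictl version` to answer; both succeed immediately on this run. The wait is a plain poll-with-deadline, roughly as below (the interval and the stat-based check are assumptions):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls until path exists or the deadline passes.
    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
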
	I0701 12:24:25.498453  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:24:25.525766  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:24:25.549708  653531 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
	I0701 12:24:25.549757  653531 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:24:25.552699  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:25.553097  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:25.553132  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:25.553374  653531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0701 12:24:25.557331  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
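
The /etc/hosts update is idempotent: `grep -v` strips any stale line for the name, the fresh mapping is appended, and the result is copied back over /etc/hosts via a `$$`-suffixed temp file. The same filter-and-append expressed in Go (the temp-file-and-rename is illustrative):

    package main

    import (
        "os"
        "strings"
    )

    // pinHost rewrites hostsPath so that exactly one line maps name to ip.
    func pinHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimSuffix(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale mapping for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        tmp := hostsPath + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            return err
        }
        return os.Rename(tmp, hostsPath)
    }

    func main() {
        _ = pinHost("/etc/hosts", "192.168.39.1", "host.minikube.internal")
    }
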
	I0701 12:24:25.569653  653531 kubeadm.go:877] updating cluster {Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:fa
lse freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0701 12:24:25.569810  653531 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:24:25.569866  653531 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:24:25.593428  653531 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0701 12:24:25.593450  653531 docker.go:615] Images already preloaded, skipping extraction
	I0701 12:24:25.593535  653531 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:24:25.613507  653531 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0701 12:24:25.613542  653531 cache_images.go:84] Images are preloaded, skipping loading
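
cache_images.go lists what the Docker daemon already holds and skips extracting the preload tarball when every expected image is present, which is why nothing is unpacked on this restart. The check is a set difference, along these lines (image names taken from the log):

    package main

    import "fmt"

    func main() {
        want := []string{
            "registry.k8s.io/kube-apiserver:v1.30.2",
            "registry.k8s.io/etcd:3.5.12-0",
            "gcr.io/k8s-minikube/storage-provisioner:v5",
        }
        // From `docker images --format {{.Repository}}:{{.Tag}}`.
        got := map[string]bool{
            "registry.k8s.io/kube-apiserver:v1.30.2":     true,
            "registry.k8s.io/etcd:3.5.12-0":              true,
            "gcr.io/k8s-minikube/storage-provisioner:v5": true,
        }
        var missing []string
        for _, img := range want {
            if !got[img] {
                missing = append(missing, img)
            }
        }
        if len(missing) == 0 {
            fmt.Println("Images are preloaded, skipping loading")
        } else {
            fmt.Println("need to load:", missing)
        }
    }
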
	I0701 12:24:25.613557  653531 kubeadm.go:928] updating node { 192.168.39.16 8443 v1.30.2 docker true true} ...
	I0701 12:24:25.613677  653531 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0701 12:24:25.613736  653531 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0701 12:24:25.636959  653531 cni.go:84] Creating CNI manager for ""
	I0701 12:24:25.636987  653531 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0701 12:24:25.637001  653531 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0701 12:24:25.637033  653531 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.16 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-735960 NodeName:ha-735960 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0701 12:24:25.637207  653531 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-735960"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
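The kubeadm.yaml above is rendered from the option struct logged at kubeadm.go:181: node IP, cluster name, pod/service CIDRs and the cri-dockerd socket are all parameters. A toy text/template rendering of one fragment shows the shape (the template text here is an assumption, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    const frag = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    controlPlaneEndpoint: {{.Endpoint}}:{{.Port}}
    kubernetesVersion: {{.Version}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(frag))
        _ = t.Execute(os.Stdout, map[string]string{
            "Endpoint":      "control-plane.minikube.internal",
            "Port":          "8443",
            "Version":       "v1.30.2",
            "PodSubnet":     "10.244.0.0/16",
            "ServiceSubnet": "10.96.0.0/12",
        })
    }
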
	I0701 12:24:25.637234  653531 kube-vip.go:115] generating kube-vip config ...
	I0701 12:24:25.637291  653531 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0701 12:24:25.651059  653531 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0701 12:24:25.651192  653531 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
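
kube-vip.go probes for IPVS by modprobing ip_vs and friends before emitting the manifest; only when the probe succeeds does it add the lb_enable/lb_port entries seen above, so kube-vip load-balances apiserver traffic across the control planes in addition to holding the 192.168.39.254 VIP. The probe reduces to roughly this (a sketch, error handling simplified):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same probe as the log: if the IPVS modules load, enable control-plane LB.
        err := exec.Command("sudo", "sh", "-c",
            "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()
        lbEnable := err == nil
        fmt.Println("auto-enabling control-plane load-balancing:", lbEnable)
    }
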
	I0701 12:24:25.651261  653531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 12:24:25.660952  653531 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:24:25.661049  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0701 12:24:25.669901  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0701 12:24:25.685801  653531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:24:25.701259  653531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0701 12:24:25.717237  653531 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0701 12:24:25.732682  653531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0701 12:24:25.736549  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:24:25.748348  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:25.857797  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:24:25.874307  653531 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960 for IP: 192.168.39.16
	I0701 12:24:25.874340  653531 certs.go:194] generating shared ca certs ...
	I0701 12:24:25.874365  653531 certs.go:226] acquiring lock for ca certs: {Name:mk34e166bfd069e523b2325e14d1812c523bff53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:24:25.874584  653531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key
	I0701 12:24:25.874645  653531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key
	I0701 12:24:25.874659  653531 certs.go:256] generating profile certs ...
	I0701 12:24:25.874733  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key
	I0701 12:24:25.874814  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af
	I0701 12:24:25.874868  653531 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key
	I0701 12:24:25.874883  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 12:24:25.874918  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 12:24:25.874937  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 12:24:25.874955  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 12:24:25.874972  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0701 12:24:25.874991  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0701 12:24:25.875008  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0701 12:24:25.875025  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0701 12:24:25.875093  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem (1338 bytes)
	W0701 12:24:25.875146  653531 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854_empty.pem, impossibly tiny 0 bytes
	I0701 12:24:25.875161  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 12:24:25.875193  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:24:25.875224  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:24:25.875261  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem (1675 bytes)
	I0701 12:24:25.875343  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:24:25.875386  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /usr/share/ca-certificates/6378542.pem
	I0701 12:24:25.875409  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:25.875426  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem -> /usr/share/ca-certificates/637854.pem
	I0701 12:24:25.876083  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:24:25.910761  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:24:25.938480  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:24:25.963281  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:24:25.989413  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0701 12:24:26.015055  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 12:24:26.039406  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 12:24:26.062955  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 12:24:26.093960  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /usr/share/ca-certificates/6378542.pem (1708 bytes)
	I0701 12:24:26.125896  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:24:26.156031  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem --> /usr/share/ca-certificates/637854.pem (1338 bytes)
	I0701 12:24:26.181375  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 12:24:26.209470  653531 ssh_runner.go:195] Run: openssl version
	I0701 12:24:26.218386  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6378542.pem && ln -fs /usr/share/ca-certificates/6378542.pem /etc/ssl/certs/6378542.pem"
	I0701 12:24:26.233243  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6378542.pem
	I0701 12:24:26.241811  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 12:11 /usr/share/ca-certificates/6378542.pem
	I0701 12:24:26.241888  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem
	I0701 12:24:26.250559  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6378542.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 12:24:26.277768  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:24:26.305594  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:26.315685  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:26.315763  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:26.330923  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 12:24:26.351095  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/637854.pem && ln -fs /usr/share/ca-certificates/637854.pem /etc/ssl/certs/637854.pem"
	I0701 12:24:26.374355  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/637854.pem
	I0701 12:24:26.380759  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 12:11 /usr/share/ca-certificates/637854.pem
	I0701 12:24:26.380836  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/637854.pem
	I0701 12:24:26.392584  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/637854.pem /etc/ssl/certs/51391683.0"
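
Each CA lands twice in the guest: once under a readable name in /usr/share/ca-certificates and once as the /etc/ssl/certs/<subject-hash>.0 symlink that OpenSSL actually resolves (b5213941 is minikubeCA's hash). The hash-then-link step, sketched with an exec'd openssl (paths illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkCert creates the /etc/ssl/certs/<hash>.0 symlink OpenSSL resolves CAs by.
    func linkCert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA
        link := "/etc/ssl/certs/" + hash + ".0"
        if _, err := os.Lstat(link); err == nil {
            return nil // already linked
        }
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
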
	I0701 12:24:26.411160  653531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 12:24:26.419483  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 12:24:26.437558  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 12:24:26.444826  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 12:24:26.454628  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 12:24:26.467473  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 12:24:26.476039  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
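
`-checkend 86400` asks openssl whether each control-plane certificate is still valid 24 hours from now; a failure on any of them would force regeneration before kubeadm runs. The native-Go equivalent of one such check:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires inside d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("needs regeneration:", soon) // mirrors openssl x509 -checkend 86400
    }
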
	I0701 12:24:26.482296  653531 kubeadm.go:391] StartCluster: {Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false
freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:24:26.482508  653531 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0701 12:24:26.498609  653531 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0701 12:24:26.509374  653531 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0701 12:24:26.509403  653531 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0701 12:24:26.509410  653531 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0701 12:24:26.509466  653531 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 12:24:26.518865  653531 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 12:24:26.519310  653531 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-735960" does not appear in /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:24:26.519460  653531 kubeconfig.go:62] /home/jenkins/minikube-integration/19166-630650/kubeconfig needs updating (will repair): [kubeconfig missing "ha-735960" cluster setting kubeconfig missing "ha-735960" context setting]
	I0701 12:24:26.519772  653531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/kubeconfig: {Name:mke3ef9d019eff4edd273b00c416fd77ed009242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:24:26.520253  653531 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:24:26.520566  653531 kapi.go:59] client config for ha-735960: &rest.Config{Host:"https://192.168.39.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key", CAFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(ni
l)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfbb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0701 12:24:26.521041  653531 cert_rotation.go:137] Starting client certificate rotation controller
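
kubeconfig.go noticed the stopped cluster's entry had gone missing and repaired the file, after which kapi.go builds the rest.Config dumped above, pointing at the profile's client cert and key. Loading the same file with client-go looks like this (a sketch; the path is this job's kubeconfig):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Resolve the repaired kubeconfig into a rest.Config, as kapi.go does
        // before polling the apiserver.
        cfg, err := clientcmd.BuildConfigFromFlags("",
            "/home/jenkins/minikube-integration/19166-630650/kubeconfig")
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver:", cfg.Host)
    }
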
	I0701 12:24:26.521235  653531 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 12:24:26.530555  653531 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.16
	I0701 12:24:26.530586  653531 kubeadm.go:591] duration metric: took 21.167521ms to restartPrimaryControlPlane
	I0701 12:24:26.530596  653531 kubeadm.go:393] duration metric: took 48.31583ms to StartCluster
	I0701 12:24:26.530618  653531 settings.go:142] acquiring lock: {Name:mk6f7c85ea77a73ff0ac851454721f2e6e309153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:24:26.530700  653531 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:24:26.531272  653531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/kubeconfig: {Name:mke3ef9d019eff4edd273b00c416fd77ed009242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:24:26.531528  653531 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:24:26.531554  653531 start.go:240] waiting for startup goroutines ...
	I0701 12:24:26.531572  653531 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0701 12:24:26.531767  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:26.534496  653531 out.go:177] * Enabled addons: 
	I0701 12:24:26.535873  653531 addons.go:510] duration metric: took 4.304011ms for enable addons: enabled=[]
	I0701 12:24:26.535915  653531 start.go:245] waiting for cluster config update ...
	I0701 12:24:26.535925  653531 start.go:254] writing updated cluster config ...
	I0701 12:24:26.537498  653531 out.go:177] 
	I0701 12:24:26.539211  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:26.539336  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:24:26.541509  653531 out.go:177] * Starting "ha-735960-m02" control-plane node in "ha-735960" cluster
	I0701 12:24:26.542802  653531 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:24:26.542833  653531 cache.go:56] Caching tarball of preloaded images
	I0701 12:24:26.542967  653531 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:24:26.542983  653531 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:24:26.543093  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:24:26.543293  653531 start.go:360] acquireMachinesLock for ha-735960-m02: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:24:26.543355  653531 start.go:364] duration metric: took 39.786µs to acquireMachinesLock for "ha-735960-m02"
	I0701 12:24:26.543382  653531 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:24:26.543393  653531 fix.go:54] fixHost starting: m02
	I0701 12:24:26.543665  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:26.543694  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:26.558741  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34329
	I0701 12:24:26.559300  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:26.559767  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:26.559790  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:26.560107  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:26.560324  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:26.560471  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetState
	I0701 12:24:26.561903  653531 fix.go:112] recreateIfNeeded on ha-735960-m02: state=Stopped err=<nil>
	I0701 12:24:26.561933  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	W0701 12:24:26.562104  653531 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:24:26.564118  653531 out.go:177] * Restarting existing kvm2 VM for "ha-735960-m02" ...
	I0701 12:24:26.565547  653531 main.go:141] libmachine: (ha-735960-m02) Calling .Start
	I0701 12:24:26.565742  653531 main.go:141] libmachine: (ha-735960-m02) Ensuring networks are active...
	I0701 12:24:26.566439  653531 main.go:141] libmachine: (ha-735960-m02) Ensuring network default is active
	I0701 12:24:26.566739  653531 main.go:141] libmachine: (ha-735960-m02) Ensuring network mk-ha-735960 is active
	I0701 12:24:26.567095  653531 main.go:141] libmachine: (ha-735960-m02) Getting domain xml...
	I0701 12:24:26.567681  653531 main.go:141] libmachine: (ha-735960-m02) Creating domain...
	I0701 12:24:27.772734  653531 main.go:141] libmachine: (ha-735960-m02) Waiting to get IP...
	I0701 12:24:27.773478  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:27.773801  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:27.773853  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:27.773777  653719 retry.go:31] will retry after 217.058414ms: waiting for machine to come up
	I0701 12:24:27.992187  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:27.992715  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:27.992745  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:27.992653  653719 retry.go:31] will retry after 295.156992ms: waiting for machine to come up
	I0701 12:24:28.289101  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:28.289597  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:28.289630  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:28.289531  653719 retry.go:31] will retry after 353.406325ms: waiting for machine to come up
	I0701 12:24:28.644006  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:28.644479  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:28.644510  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:28.644437  653719 retry.go:31] will retry after 398.224689ms: waiting for machine to come up
	I0701 12:24:29.044072  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:29.044514  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:29.044545  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:29.044461  653719 retry.go:31] will retry after 547.020131ms: waiting for machine to come up
	I0701 12:24:29.593264  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:29.593690  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:29.593709  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:29.593653  653719 retry.go:31] will retry after 787.756844ms: waiting for machine to come up
	I0701 12:24:30.382731  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:30.383180  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:30.383209  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:30.383137  653719 retry.go:31] will retry after 870.067991ms: waiting for machine to come up
	I0701 12:24:31.254672  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:31.255252  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:31.255285  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:31.255205  653719 retry.go:31] will retry after 1.371479719s: waiting for machine to come up
	I0701 12:24:32.628605  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:32.629092  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:32.629124  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:32.629036  653719 retry.go:31] will retry after 1.347043223s: waiting for machine to come up
	I0701 12:24:33.978739  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:33.979246  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:33.979275  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:33.979195  653719 retry.go:31] will retry after 2.257830197s: waiting for machine to come up
	I0701 12:24:36.239828  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:36.240400  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:36.240433  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:36.240355  653719 retry.go:31] will retry after 2.834526493s: waiting for machine to come up
	I0701 12:24:39.078121  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:39.078416  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:39.078448  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:39.078379  653719 retry.go:31] will retry after 2.465969863s: waiting for machine to come up
	I0701 12:24:41.547043  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.547535  653531 main.go:141] libmachine: (ha-735960-m02) Found IP for machine: 192.168.39.86
	I0701 12:24:41.547569  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has current primary IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
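
The "will retry after …" lines above (217ms, 295ms, 353ms, … 2.8s) come from a poll-with-growing-backoff loop while waiting for the DHCP lease to appear. A minimal sketch of that pattern in Go — the jittered growth factor is an assumption, and getIP stands in for querying the libvirt DHCP leases:

// Poll for an IP with an increasing, jittered delay between attempts.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(getIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := getIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		// Grow the delay with a little jitter, as the spacing in the log suggests.
		delay = time.Duration(float64(delay) * (1.2 + rand.Float64()))
	}
	return "", errors.New("timed out waiting for IP")
}
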
	I0701 12:24:41.547579  653531 main.go:141] libmachine: (ha-735960-m02) Reserving static IP address...
	I0701 12:24:41.547991  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "ha-735960-m02", mac: "52:54:00:0b:2f:ce", ip: "192.168.39.86"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.548015  653531 main.go:141] libmachine: (ha-735960-m02) Reserved static IP address: 192.168.39.86
	I0701 12:24:41.548032  653531 main.go:141] libmachine: (ha-735960-m02) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960-m02", mac: "52:54:00:0b:2f:ce", ip: "192.168.39.86"}
	I0701 12:24:41.548045  653531 main.go:141] libmachine: (ha-735960-m02) DBG | Getting to WaitForSSH function...
	I0701 12:24:41.548059  653531 main.go:141] libmachine: (ha-735960-m02) Waiting for SSH to be available...
	I0701 12:24:41.550171  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.550523  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.550552  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.550644  653531 main.go:141] libmachine: (ha-735960-m02) DBG | Using SSH client type: external
	I0701 12:24:41.550670  653531 main.go:141] libmachine: (ha-735960-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa (-rw-------)
	I0701 12:24:41.550719  653531 main.go:141] libmachine: (ha-735960-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:24:41.550739  653531 main.go:141] libmachine: (ha-735960-m02) DBG | About to run SSH command:
	I0701 12:24:41.550754  653531 main.go:141] libmachine: (ha-735960-m02) DBG | exit 0
	I0701 12:24:41.678305  653531 main.go:141] libmachine: (ha-735960-m02) DBG | SSH cmd err, output: <nil>: 
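
WaitForSSH above shells out to the system ssh client with the option set shown in the log and runs `exit 0` until the command succeeds. A hedged Go sketch of that probe (option list abbreviated; the full set appears in the log):

// Probe SSH readiness by running `exit 0` through the external ssh binary.
package main

import (
	"errors"
	"os/exec"
	"time"
)

func waitForSSH(user, ip, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-i", keyPath,
		user + "@" + ip,
		"exit 0",
	}
	for time.Now().Before(deadline) {
		if exec.Command("ssh", args...).Run() == nil {
			return nil // SSH is available
		}
		time.Sleep(2 * time.Second)
	}
	return errors.New("timed out waiting for SSH")
}
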
	I0701 12:24:41.678691  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetConfigRaw
	I0701 12:24:41.679334  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:24:41.682006  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.682508  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.682540  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.682792  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:24:41.683005  653531 machine.go:94] provisionDockerMachine start ...
	I0701 12:24:41.683030  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:41.683290  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:41.685599  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.685951  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.685979  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.686153  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:41.686378  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.686551  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.686684  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:41.686822  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:41.687030  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:41.687043  653531 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:24:41.802622  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:24:41.802657  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:24:41.802940  653531 buildroot.go:166] provisioning hostname "ha-735960-m02"
	I0701 12:24:41.802963  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:24:41.803281  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:41.805937  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.806443  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.806470  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.806608  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:41.806785  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.807003  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.807154  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:41.807371  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:41.807554  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:41.807567  653531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960-m02 && echo "ha-735960-m02" | sudo tee /etc/hostname
	I0701 12:24:41.938306  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960-m02
	
	I0701 12:24:41.938353  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:41.941077  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.941535  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.941592  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.941765  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:41.941994  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.942161  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.942290  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:41.942491  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:41.942676  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:41.942701  653531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:24:42.062715  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 12:24:42.062750  653531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:24:42.062772  653531 buildroot.go:174] setting up certificates
	I0701 12:24:42.062785  653531 provision.go:84] configureAuth start
	I0701 12:24:42.062795  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:24:42.063134  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:24:42.065907  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.066246  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.066279  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.066490  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:42.068450  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.068818  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.068843  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.068957  653531 provision.go:143] copyHostCerts
	I0701 12:24:42.068988  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:24:42.069022  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:24:42.069030  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:24:42.069082  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:24:42.069156  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:24:42.069173  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:24:42.069180  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:24:42.069199  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:24:42.069241  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:24:42.069257  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:24:42.069263  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:24:42.069279  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:24:42.069326  653531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960-m02 san=[127.0.0.1 192.168.39.86 ha-735960-m02 localhost minikube]
	I0701 12:24:42.315961  653531 provision.go:177] copyRemoteCerts
	I0701 12:24:42.316035  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:24:42.316061  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:42.318992  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.319361  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.319395  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.319557  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:42.319740  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.319969  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:42.320092  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:24:42.408924  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:24:42.408999  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:24:42.434942  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:24:42.435038  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0701 12:24:42.458628  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:24:42.458728  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 12:24:42.482505  653531 provision.go:87] duration metric: took 419.705556ms to configureAuth
	I0701 12:24:42.482536  653531 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:24:42.482760  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:42.482797  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:42.483103  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:42.485829  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.486249  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.486277  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.486574  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:42.486846  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.487031  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.487211  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:42.487420  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:42.487596  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:42.487608  653531 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:24:42.603937  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:24:42.603962  653531 buildroot.go:70] root file system type: tmpfs
	I0701 12:24:42.604101  653531 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:24:42.604123  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:42.606937  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.607326  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.607351  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.607512  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:42.607762  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.607935  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.608131  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:42.608318  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:42.608490  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:42.608578  653531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.16"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:24:42.731927  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.16
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:24:42.731963  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:42.735092  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.735552  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.735586  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.735721  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:42.735916  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.736097  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.736226  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:42.736425  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:42.736596  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:42.736613  653531 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:24:44.641546  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 12:24:44.641584  653531 machine.go:97] duration metric: took 2.958558644s to provisionDockerMachine
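
The `diff … || { mv …; systemctl … }` one-liner above makes the unit update idempotent: the rendered docker.service.new only replaces the live unit, and the daemon is only reloaded and restarted, when the file actually differs (here diff fails because the unit did not yet exist, so the replacement runs). A rough Go equivalent of that compare-then-swap, with illustrative paths:

// Replace the unit file only if the newly rendered content differs,
// then reload and restart -- mirroring the shell one-liner in the log.
package main

import (
	"bytes"
	"os"
	"os/exec"
)

func updateUnit(path string, rendered []byte) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, rendered) {
		return nil // unchanged: skip the restart entirely
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
	} {
		if err := exec.Command("systemctl", args...).Run(); err != nil {
			return err
		}
	}
	return nil
}
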
	I0701 12:24:44.641601  653531 start.go:293] postStartSetup for "ha-735960-m02" (driver="kvm2")
	I0701 12:24:44.641615  653531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:24:44.641637  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:44.642004  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:24:44.642040  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:44.645224  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.645706  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:44.645738  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.645868  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:44.646053  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:44.646222  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:44.646376  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:24:44.736407  653531 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:24:44.740656  653531 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:24:44.740682  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:24:44.740758  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:24:44.740835  653531 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:24:44.740848  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:24:44.740945  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:24:44.749928  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:24:44.772391  653531 start.go:296] duration metric: took 130.772957ms for postStartSetup
	I0701 12:24:44.772467  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:44.772787  653531 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:24:44.772824  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:44.775217  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.775582  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:44.775607  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.775804  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:44.776027  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:44.776203  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:44.776383  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:24:44.864587  653531 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:24:44.864665  653531 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:24:44.904439  653531 fix.go:56] duration metric: took 18.361036234s for fixHost
	I0701 12:24:44.904495  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:44.907382  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.907911  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:44.907944  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.908260  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:44.908504  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:44.908689  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:44.908847  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:44.909036  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:44.909257  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:44.909273  653531 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0701 12:24:45.022815  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836684.998547011
	
	I0701 12:24:45.022845  653531 fix.go:216] guest clock: 1719836684.998547011
	I0701 12:24:45.022855  653531 fix.go:229] Guest: 2024-07-01 12:24:44.998547011 +0000 UTC Remote: 2024-07-01 12:24:44.904469964 +0000 UTC m=+42.374321626 (delta=94.077047ms)
	I0701 12:24:45.022878  653531 fix.go:200] guest clock delta is within tolerance: 94.077047ms
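
fix.go compares the guest clock against the host and only resyncs when the drift exceeds a tolerance; here the 94ms delta passes. A small sketch of that check — the 1s tolerance used by a caller would be an assumption, not a value taken from the log:

// Compare guest and host clocks; resync only if the drift exceeds tolerance.
package main

import (
	"fmt"
	"time"
)

func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
		return true
	}
	return false // caller would set the guest clock here
}
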
	I0701 12:24:45.022885  653531 start.go:83] releasing machines lock for "ha-735960-m02", held for 18.479517819s
	I0701 12:24:45.022904  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:45.023158  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:24:45.025946  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.026429  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:45.026468  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.028669  653531 out.go:177] * Found network options:
	I0701 12:24:45.030344  653531 out.go:177]   - NO_PROXY=192.168.39.16
	W0701 12:24:45.031921  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:24:45.031959  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:45.032658  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:45.032888  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:45.033013  653531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:24:45.033058  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	W0701 12:24:45.033081  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:24:45.033171  653531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0701 12:24:45.033195  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:45.035752  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.035981  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.036219  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:45.036245  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.036348  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:45.036378  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.036406  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:45.036593  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:45.036652  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:45.036754  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:45.036826  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:45.036903  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:45.036969  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:24:45.037025  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	W0701 12:24:45.137872  653531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:24:45.137946  653531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:24:45.154683  653531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:24:45.154717  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:24:45.154827  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:24:45.176886  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:24:45.188345  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:24:45.197947  653531 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:24:45.198012  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:24:45.207676  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:24:45.217559  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:24:45.227803  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:24:45.238295  653531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:24:45.248764  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:24:45.258909  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:24:45.268726  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:24:45.279039  653531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:24:45.288042  653531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:24:45.296914  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:45.411404  653531 ssh_runner.go:195] Run: sudo systemctl restart containerd
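
The sed runs above rewrite /etc/containerd/config.toml so containerd uses cgroupfs (SystemdCgroup = false), swap deprecated runtime names for io.containerd.runc.v2, and pin conf_dir before containerd is restarted. A hedged Go sketch of the SystemdCgroup edit using the same indentation-preserving regex idea:

// Rewrite `SystemdCgroup = ...` lines in containerd's config.toml to
// false, preserving indentation -- the edit the sed command performs.
package main

import (
	"os"
	"regexp"
)

var systemdCgroup = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

func useCgroupfs(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := systemdCgroup.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, out, 0o644)
}
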
	I0701 12:24:45.436012  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:24:45.436122  653531 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:24:45.450142  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:24:45.462829  653531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:24:45.481152  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:24:45.494283  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:24:45.507074  653531 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:24:45.534155  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:24:45.547185  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:24:45.564773  653531 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:24:45.568760  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:24:45.577542  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:24:45.593021  653531 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:24:45.701211  653531 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:24:45.815750  653531 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:24:45.815810  653531 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 12:24:45.831989  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:45.941168  653531 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:24:48.340550  653531 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.399331326s)
	I0701 12:24:48.340643  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 12:24:48.354582  653531 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0701 12:24:48.370449  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:24:48.383634  653531 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:24:48.491334  653531 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:24:48.612412  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:48.742773  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:24:48.759856  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:24:48.772621  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:48.884376  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0701 12:24:48.964457  653531 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:24:48.964538  653531 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
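
"Will wait 60s for socket path" above is a stat-poll on /var/run/cri-dockerd.sock with a deadline; the first stat succeeds here. A sketch of that wait (the 2s poll interval is an assumption):

// Poll for a socket file to appear, failing after the deadline.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket exists
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}
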
	I0701 12:24:48.970016  653531 start.go:562] Will wait 60s for crictl version
	I0701 12:24:48.970082  653531 ssh_runner.go:195] Run: which crictl
	I0701 12:24:48.974017  653531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:24:49.010380  653531 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.1
	RuntimeApiVersion:  v1
	I0701 12:24:49.010470  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:24:49.038204  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:24:49.060452  653531 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
	I0701 12:24:49.061662  653531 out.go:177]   - env NO_PROXY=192.168.39.16
	I0701 12:24:49.062894  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:24:49.065420  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:49.065726  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:49.065756  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:49.065973  653531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0701 12:24:49.070110  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
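
The grep/bash pair above first checks whether host.minikube.internal already maps to 192.168.39.1 and, when it does not, rewrites /etc/hosts by filtering out any stale entry and appending the fresh one. The same filter-and-append expressed in Go, for clarity only:

// Drop any stale `<ip>\thost.minikube.internal` line from /etc/hosts,
// then append the current mapping -- what the grep -v / echo pipeline does.
package main

import (
	"os"
	"strings"
)

func setHostEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}
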
	I0701 12:24:49.082188  653531 mustload.go:65] Loading cluster: ha-735960
	I0701 12:24:49.082530  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:49.082941  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:49.082993  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:49.097892  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43153
	I0701 12:24:49.098396  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:49.098894  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:49.098917  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:49.099215  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:49.099436  653531 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:24:49.100798  653531 host.go:66] Checking if "ha-735960" exists ...
	I0701 12:24:49.101079  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:49.101112  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:49.115736  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34567
	I0701 12:24:49.116185  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:49.116654  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:49.116678  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:49.117007  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:49.117203  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:49.117366  653531 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960 for IP: 192.168.39.86
	I0701 12:24:49.117380  653531 certs.go:194] generating shared ca certs ...
	I0701 12:24:49.117398  653531 certs.go:226] acquiring lock for ca certs: {Name:mk34e166bfd069e523b2325e14d1812c523bff53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:24:49.117551  653531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key
	I0701 12:24:49.117591  653531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key
	I0701 12:24:49.117600  653531 certs.go:256] generating profile certs ...
	I0701 12:24:49.117669  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key
	I0701 12:24:49.117728  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.b19d6c48
	I0701 12:24:49.117760  653531 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key
	I0701 12:24:49.117771  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 12:24:49.117786  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 12:24:49.117800  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 12:24:49.117811  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 12:24:49.117823  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0701 12:24:49.117835  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0701 12:24:49.117847  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0701 12:24:49.117858  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0701 12:24:49.117903  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem (1338 bytes)
	W0701 12:24:49.117934  653531 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854_empty.pem, impossibly tiny 0 bytes
	I0701 12:24:49.117946  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 12:24:49.117973  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:24:49.117994  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:24:49.118013  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem (1675 bytes)
	I0701 12:24:49.118048  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:24:49.118076  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem -> /usr/share/ca-certificates/637854.pem
	I0701 12:24:49.118092  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /usr/share/ca-certificates/6378542.pem
	I0701 12:24:49.118104  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
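
[editor's note] The certs.go / vm_assets.go lines above stage every certificate twice: once under /var/lib/minikube/certs for the control-plane components, and once under /usr/share/ca-certificates so the guest OS trust store also accepts them. A sketch of that source-to-destination mapping (paths copied from the log; this is an illustrative subset, not minikube's full asset list):

package main

import "fmt"

func main() {
	// Local minikube home -> in-VM destination, mirroring the NewFileAsset
	// pairs logged above. Note the same CA lands in two places.
	type asset struct{ src, dst string }
	home := "/home/jenkins/minikube-integration/19166-630650/.minikube"
	assets := []asset{
		{home + "/ca.crt", "/var/lib/minikube/certs/ca.crt"},
		{home + "/ca.crt", "/usr/share/ca-certificates/minikubeCA.pem"},
		{home + "/profiles/ha-735960/apiserver.crt", "/var/lib/minikube/certs/apiserver.crt"},
		{home + "/files/etc/ssl/certs/6378542.pem", "/usr/share/ca-certificates/6378542.pem"},
	}
	for _, a := range assets {
		fmt.Printf("scp %s --> %s\n", a.src, a.dst)
	}
}
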
	I0701 12:24:49.118150  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:49.120907  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:49.121392  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:49.121418  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:49.121523  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:49.121694  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:49.121825  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:49.121959  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
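
[editor's note] sshutil.go dials the VM as user docker with the per-machine RSA key shown above. A minimal sketch of an equivalent client using golang.org/x/crypto/ssh (the disabled host-key check is for illustration only; minikube-managed VMs have freshly generated host keys):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; never in production
	}
	client, err := ssh.Dial("tcp", "192.168.39.16:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("openssl version")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
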
	I0701 12:24:49.190715  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0701 12:24:49.195755  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0701 12:24:49.206197  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0701 12:24:49.209869  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0701 12:24:49.219170  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0701 12:24:49.223114  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0701 12:24:49.233000  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0701 12:24:49.237162  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0701 12:24:49.246812  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0701 12:24:49.250554  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0701 12:24:49.259926  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0701 12:24:49.263843  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
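
[editor's note] Before provisioning m02, the runner stats each shared secret on the primary node (service-account keypair, front-proxy CA, etcd CA) and pulls it into memory, so every control plane reuses one set rather than minting its own. The `%!s(MISSING)` in the stat lines is almost certainly a Go printf artifact in the log itself; the intended remote command is `stat -c %s <file>`. A hedged sketch of the read-into-memory step over an established *ssh.Client (the `sudo cat` transport is an assumption):

package certs

import "golang.org/x/crypto/ssh"

// readRemote pulls a small file (for example /var/lib/minikube/certs/sa.pub)
// into memory over an existing SSH connection, mirroring the
// "scp /var/lib/minikube/certs/... --> memory" lines above.
func readRemote(client *ssh.Client, path string) ([]byte, error) {
	sess, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer sess.Close()
	return sess.Output("sudo cat " + path)
}
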
	I0701 12:24:49.274536  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:24:49.299467  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:24:49.322887  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:24:49.345311  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:24:49.367988  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0701 12:24:49.390632  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 12:24:49.416047  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 12:24:49.439560  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 12:24:49.462382  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem --> /usr/share/ca-certificates/637854.pem (1338 bytes)
	I0701 12:24:49.484590  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /usr/share/ca-certificates/6378542.pem (1708 bytes)
	I0701 12:24:49.507507  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:24:49.529932  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0701 12:24:49.545966  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0701 12:24:49.561557  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0701 12:24:49.577402  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0701 12:24:49.593250  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0701 12:24:49.609739  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0701 12:24:49.626015  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0701 12:24:49.643897  653531 ssh_runner.go:195] Run: openssl version
	I0701 12:24:49.649608  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/637854.pem && ln -fs /usr/share/ca-certificates/637854.pem /etc/ssl/certs/637854.pem"
	I0701 12:24:49.660203  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/637854.pem
	I0701 12:24:49.664449  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 12:11 /usr/share/ca-certificates/637854.pem
	I0701 12:24:49.664503  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/637854.pem
	I0701 12:24:49.670228  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/637854.pem /etc/ssl/certs/51391683.0"
	I0701 12:24:49.680554  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6378542.pem && ln -fs /usr/share/ca-certificates/6378542.pem /etc/ssl/certs/6378542.pem"
	I0701 12:24:49.690901  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6378542.pem
	I0701 12:24:49.695200  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 12:11 /usr/share/ca-certificates/6378542.pem
	I0701 12:24:49.695266  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem
	I0701 12:24:49.700503  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6378542.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 12:24:49.710442  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:24:49.720297  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:49.724530  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:49.724590  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:49.729832  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
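
[editor's note] The `openssl x509 -hash -noout` calls compute each certificate's subject hash (b5213941 for minikubeCA above), and the `ln -fs` commands create the `<hash>.0` symlinks that OpenSSL's c_rehash-style directory lookup expects under /etc/ssl/certs. The same two steps in Go, shelling out to openssl (assumes openssl on PATH and write access to /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs semantics: replace any existing link before creating the new one.
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println(link, "->", pem)
}
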
	I0701 12:24:49.739574  653531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 12:24:49.743717  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 12:24:49.749498  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 12:24:49.755217  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 12:24:49.761210  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 12:24:49.767138  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 12:24:49.772853  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
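
[editor's note] Each `openssl x509 -checkend 86400` above exits nonzero if the certificate expires within the next 86400 seconds (24 hours); that is how the runner decides whether the apiserver-etcd-client, apiserver-kubelet-client, etcd server/peer/healthcheck, and front-proxy-client certs can be reused rather than rotated. The equivalent check with Go's crypto/x509, as a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of -checkend 86400: flag certs that expire within 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least 24h")
}
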
	I0701 12:24:49.778598  653531 kubeadm.go:928] updating node {m02 192.168.39.86 8443 v1.30.2 docker true true} ...
	I0701 12:24:49.778706  653531 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
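
[editor's note] In the kubelet unit above, the empty `ExecStart=` line is deliberate: a systemd drop-in must first clear the base unit's ExecStart list before supplying a replacement, since a non-oneshot service may declare only one. A sketch that writes the same drop-in (flags copied verbatim from the log; the strings.Join form avoids any indentation leaking into the unit file):

package main

import (
	"os"
	"strings"
)

func main() {
	// Matches the 10-kubeadm.conf drop-in transferred a few lines below.
	lines := []string{
		"[Service]",
		"ExecStart=",
		"ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.86",
		"",
	}
	path := "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
	if err := os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644); err != nil {
		panic(err)
	}
}
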
	I0701 12:24:49.778735  653531 kube-vip.go:115] generating kube-vip config ...
	I0701 12:24:49.778769  653531 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0701 12:24:49.792722  653531 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0701 12:24:49.792794  653531 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
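
[editor's note] The manifest above runs kube-vip v0.8.0 as a static pod on each control-plane node: it ARP-advertises the VIP 192.168.39.254 on eth0, takes a leader-election lease named plndr-cp-lock in kube-system, and, because kube-vip.go auto-enabled cp_enable/lb_enable, load-balances apiserver traffic on port 8443 across the control planes. A hedged text/template sketch of how such a manifest can be rendered from the few per-cluster values (a trimmed fragment, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// Trimmed fragment showing only the values that vary per cluster.
const kubeVIPFragment = `- name: address
  value: {{ .VIP }}
- name: vip_interface
  value: {{ .Interface }}
- name: lb_port
  value: "{{ .Port }}"
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVIPFragment))
	data := struct {
		VIP, Interface string
		Port           int
	}{VIP: "192.168.39.254", Interface: "eth0", Port: 8443}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
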
	I0701 12:24:49.792861  653531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 12:24:49.804161  653531 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:24:49.804241  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0701 12:24:49.814550  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0701 12:24:49.831390  653531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:24:49.848397  653531 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0701 12:24:49.865443  653531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0701 12:24:49.869104  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:24:49.880669  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:49.995061  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
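
[editor's note] The `systemctl daemon-reload` here is required because the kubelet drop-in, kubelet.service, and the kube-vip manifest were just written straight to disk: systemd caches unit files and would otherwise keep starting kubelet with the stale ExecStart. A sketch of the reload-then-start pair via os/exec (both commands need root on the VM):

package main

import "os/exec"

func main() {
	// Mirrors the two commands in the log above.
	for _, args := range [][]string{
		{"sudo", "systemctl", "daemon-reload"},
		{"sudo", "systemctl", "start", "kubelet"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			panic(string(out))
		}
	}
}
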
	I0701 12:24:50.012084  653531 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:24:50.012461  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:50.014165  653531 out.go:177] * Verifying Kubernetes components...
	I0701 12:24:50.015753  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:50.164868  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:24:50.189841  653531 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:24:50.190056  653531 kapi.go:59] client config for ha-735960: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key", CAFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfbb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0701 12:24:50.190130  653531 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.16:8443
	I0701 12:24:50.190323  653531 node_ready.go:35] waiting up to 6m0s for node "ha-735960-m02" to be "Ready" ...
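
[editor's note] start.go now polls GET /api/v1/nodes/ha-735960-m02 for up to 6 minutes until the node reports Ready; note the client host was just rewritten from the stale VIP (192.168.39.254) to the primary's IP 192.168.39.16, presumably because kube-vip is not yet advertising the VIP. A hedged client-go sketch of the same wait loop (kubeconfig path copied from the log; the 500ms interval is an assumption inferred from the timestamps below):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19166-630650/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-735960-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		// "connection refused" while the apiserver restarts is expected; retry.
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for node to be Ready")
}
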
	I0701 12:24:50.190456  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:50.190466  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:50.190477  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:50.190487  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:54.343288  653531 round_trippers.go:574] Response Status:  in 4152 milliseconds
	I0701 12:24:55.343662  653531 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:55.343730  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:55.343744  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:55.343754  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:55.343758  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:55.344302  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:55.344422  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.1:52872->192.168.39.16:8443: read: connection reset by peer
	I0701 12:24:55.344514  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:55.344528  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:55.344538  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:55.344544  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:55.344874  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:55.691490  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:55.691516  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:55.691527  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:55.691533  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:55.691976  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:56.190655  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:56.190680  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:56.190689  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:56.190694  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:56.191223  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:56.690634  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:56.690660  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:56.690669  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:56.690672  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:56.691171  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:57.190543  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:57.190576  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:57.190588  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:57.190593  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:57.191164  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:57.691155  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:57.691185  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:57.691197  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:57.691205  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:57.691722  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:57.691807  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused
	I0701 12:24:58.190799  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:58.190827  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:58.190841  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:58.190847  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:58.191262  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:58.690909  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:58.690934  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:58.690943  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:58.690947  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:58.691435  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:59.191343  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:59.191369  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:59.191379  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:59.191385  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:59.191790  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:59.691540  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:59.691570  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:59.691582  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:59.691587  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:59.692063  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:59.692155  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused
	I0701 12:25:00.190742  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:00.190767  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:00.190776  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:00.190780  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:00.191351  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:00.691648  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:00.691679  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:00.691691  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:00.691697  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:00.692126  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:01.190745  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:01.190769  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:01.190778  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:01.190784  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:01.191282  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:01.691565  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:01.691597  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:01.691614  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:01.691621  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:01.692000  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:02.191662  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:02.191693  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:02.191706  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:02.191714  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:02.192140  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:02.192224  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused
	I0701 12:25:02.691148  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:02.691173  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:02.691180  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:02.691185  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:02.691566  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:03.190561  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:03.190591  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:03.190603  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:03.190611  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:03.191147  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:03.690811  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:03.690839  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:03.690849  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:03.690854  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:03.691458  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:04.191099  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:04.191130  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:04.191142  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:04.191147  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:04.191609  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:04.691342  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:04.691368  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:04.691376  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:04.691380  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:04.691811  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:04.691897  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused
	I0701 12:25:05.191508  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:05.191532  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:05.191540  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:05.191550  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:05.192027  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:05.690552  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:05.690579  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:05.690588  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:05.690592  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:05.691114  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:06.190741  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:06.190773  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:06.190785  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:06.190790  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:06.191210  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:06.690600  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:06.690630  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:06.690640  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:06.690646  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:06.691129  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:07.191607  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:07.191631  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:07.191639  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:07.191643  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:07.192193  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:07.192283  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused
	I0701 12:25:07.691099  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:07.691129  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:07.691140  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:07.691145  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:07.691572  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:08.191598  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:08.191623  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:08.191632  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:08.191636  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:08.192026  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:08.690679  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:08.690702  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:08.690713  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:08.690717  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:08.691142  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:09.190900  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:09.190924  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:09.190932  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:09.190938  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:09.191395  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:09.690594  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:09.690615  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:09.690623  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:09.690629  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:11.690040  653531 round_trippers.go:574] Response Status: 200 OK in 1999 milliseconds
	I0701 12:25:11.702263  653531 node_ready.go:49] node "ha-735960-m02" has status "Ready":"True"
	I0701 12:25:11.702299  653531 node_ready.go:38] duration metric: took 21.511933368s for node "ha-735960-m02" to be "Ready" ...
	I0701 12:25:11.702313  653531 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
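
[editor's note] With the node Ready after 21.5s, the wait widens to the system-critical pods matched by the labels listed above (CoreDNS, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler); the loop that follows re-fetches coredns-7db6d8ff4d-nk4lf and its node roughly every 500ms, judging by the timestamps, until the pod's Ready condition turns True. A sketch of one such label-based readiness check with client-go (selector copied from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19166-630650/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
	}
}
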
	I0701 12:25:11.702416  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:25:11.702430  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:11.702441  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:11.702454  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:11.789461  653531 round_trippers.go:574] Response Status: 200 OK in 86 milliseconds
	I0701 12:25:11.802344  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:11.802466  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:11.802476  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:11.802483  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:11.802487  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:11.816015  653531 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0701 12:25:11.816768  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:11.816789  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:11.816801  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:11.816808  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:11.831063  653531 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0701 12:25:12.302968  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:12.302992  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:12.303000  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:12.303004  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:12.307067  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:12.308122  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:12.308138  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:12.308146  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:12.308150  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:12.311874  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:12.803638  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:12.803667  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:12.803679  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:12.803686  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:12.814049  653531 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0701 12:25:12.814887  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:12.814910  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:12.814921  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:12.814925  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:12.821738  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:25:13.303576  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:13.303600  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:13.303608  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:13.303614  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:13.307218  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:13.308090  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:13.308106  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:13.308113  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:13.308117  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:13.311302  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:13.803234  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:13.803266  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:13.803274  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:13.803277  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:13.806287  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:13.807004  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:13.807020  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:13.807029  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:13.807032  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:13.809746  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:13.810211  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:14.302637  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:14.302668  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:14.302676  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:14.302680  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:14.306137  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:14.306904  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:14.306920  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:14.306928  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:14.306932  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:14.309754  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:14.802564  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:14.802587  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:14.802595  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:14.802599  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:14.808775  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:25:14.809568  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:14.809588  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:14.809596  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:14.809601  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:14.812414  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:15.303353  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:15.303378  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:15.303386  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:15.303391  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:15.306881  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:15.307679  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:15.307702  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:15.307712  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:15.307721  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:15.310551  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:15.802545  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:15.802569  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:15.802577  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:15.802582  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:15.806303  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:15.807445  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:15.807462  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:15.807473  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:15.807479  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:15.813688  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:25:15.814187  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:16.303627  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:16.303655  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:16.303664  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:16.303667  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:16.307153  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:16.307819  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:16.307838  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:16.307848  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:16.307854  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:16.317298  653531 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0701 12:25:16.802946  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:16.802971  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:16.802979  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:16.802985  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:16.806421  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:16.807100  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:16.807120  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:16.807130  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:16.807135  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:16.809697  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:17.302581  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:17.302628  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:17.302640  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:17.302648  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:17.307226  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:17.307905  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:17.307922  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:17.307929  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:17.307936  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:17.311203  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:17.803470  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:17.803514  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:17.803526  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:17.803531  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:17.812734  653531 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0701 12:25:17.813577  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:17.813595  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:17.813601  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:17.813608  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:17.818648  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:25:17.819270  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:18.302575  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:18.302597  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:18.302605  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:18.302610  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:18.306847  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:18.307906  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:18.307927  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:18.307937  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:18.307943  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:18.310841  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:18.802657  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:18.802681  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:18.802689  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:18.802692  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:18.805685  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:18.806415  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:18.806434  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:18.806444  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:18.806451  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:18.809781  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:19.303618  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:19.303642  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:19.303650  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:19.303655  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:19.307473  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:19.308257  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:19.308275  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:19.308282  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:19.308286  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:19.311108  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:19.802669  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:19.802691  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:19.802700  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:19.802703  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:19.805915  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:19.806623  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:19.806641  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:19.806648  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:19.806653  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:19.809291  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:20.303135  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:20.303161  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:20.303169  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:20.303173  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:20.306861  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:20.307600  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:20.307618  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:20.307626  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:20.307630  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:20.310953  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:20.311503  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:20.803608  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:20.803633  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:20.803642  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:20.803645  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:20.807878  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:20.808941  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:20.808961  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:20.808969  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:20.808973  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:20.811817  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:21.303623  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:21.303648  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:21.303658  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:21.303662  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:21.307962  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:21.308821  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:21.308839  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:21.308846  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:21.308850  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:21.311792  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:21.803197  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:21.803227  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:21.803239  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:21.803244  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:21.806108  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:21.807085  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:21.807105  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:21.807138  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:21.807147  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:21.809757  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:22.302567  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:22.302593  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:22.302601  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:22.302608  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:22.306177  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:22.307066  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:22.307082  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:22.307091  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:22.307097  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:22.309849  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:22.803488  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:22.803511  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:22.803519  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:22.803523  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:22.807098  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:22.807809  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:22.807828  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:22.807839  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:22.807846  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:22.810906  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:22.811518  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:23.303611  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:23.303700  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:23.303719  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:23.303725  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:23.307759  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:23.308638  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:23.308659  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:23.308669  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:23.308674  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:23.312265  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:23.803188  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:23.803211  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:23.803222  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:23.803227  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:23.808854  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:25:23.810030  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:23.810047  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:23.810057  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:23.810066  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:23.813689  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:24.303587  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:24.303609  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:24.303617  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:24.303622  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:24.306935  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:24.307770  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:24.307786  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:24.307794  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:24.307798  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:24.318402  653531 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0701 12:25:24.803269  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:24.803292  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:24.803302  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:24.803307  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:24.806559  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:24.807235  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:24.807252  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:24.807259  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:24.807264  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:24.809568  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:25.303424  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:25.303447  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:25.303457  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:25.303462  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:25.306169  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:25.306850  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:25.306869  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:25.306877  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:25.306881  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:25.309797  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:25.310316  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:25.803598  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:25.803625  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:25.803636  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:25.803641  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:25.807180  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:25.808080  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:25.808098  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:25.808106  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:25.808110  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:25.810694  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:26.303736  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:26.303758  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:26.303769  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:26.303774  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:26.307524  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:26.308268  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:26.308293  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:26.308304  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:26.308309  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:26.311520  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:26.803295  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:26.803319  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:26.803328  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:26.803332  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:26.806546  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:26.807183  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:26.807197  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:26.807204  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:26.807208  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:26.809974  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:27.302802  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:27.302827  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:27.302836  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:27.302840  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:27.305889  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:27.306573  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:27.306591  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:27.306598  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:27.306602  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:27.309203  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:27.802871  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:27.802896  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:27.802904  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:27.802908  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:27.806439  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:27.807255  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:27.807275  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:27.807283  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:27.807286  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:27.810137  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:27.810761  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:28.303255  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:28.303283  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:28.303295  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:28.303300  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:28.306809  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:28.307731  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:28.307752  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:28.307762  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:28.307768  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:28.311028  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:28.802544  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:28.802570  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:28.802580  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:28.802585  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:28.805960  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:28.806724  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:28.806740  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:28.806815  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:28.806826  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:28.809472  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:29.303397  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:29.303427  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:29.303438  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:29.303443  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:29.306785  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:29.307565  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:29.307584  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:29.307592  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:29.307596  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:29.310517  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:29.802683  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:29.802709  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:29.802717  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:29.802720  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:29.806680  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:29.807385  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:29.807404  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:29.807414  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:29.807420  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:29.810474  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:29.811143  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:30.303599  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:30.303629  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:30.303639  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:30.303643  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:30.307801  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:30.308475  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:30.308491  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:30.308498  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:30.308503  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:30.311947  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:30.802655  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:30.802680  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:30.802688  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:30.802692  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:30.806031  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:30.806743  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:30.806762  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:30.806769  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:30.806774  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:30.809315  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:31.303311  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:31.303340  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:31.303350  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:31.303354  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:31.306583  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:31.307361  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:31.307384  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:31.307395  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:31.307399  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:31.311058  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:31.802712  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:31.802740  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:31.802749  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:31.802753  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:31.806584  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:31.807317  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:31.807336  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:31.807347  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:31.807361  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:31.810401  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:32.303636  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:32.303663  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:32.303671  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:32.303676  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:32.307011  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:32.307797  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:32.307815  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:32.307825  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:32.307831  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:32.314944  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:25:32.315492  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:32.802803  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:32.802830  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:32.802838  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:32.802844  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:32.807127  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:32.807884  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:32.807907  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:32.807917  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:32.807922  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:32.811565  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:33.303372  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:33.303399  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:33.303416  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:33.303421  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:33.307271  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:33.307961  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:33.307981  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:33.307988  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:33.308001  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:33.310760  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:33.802604  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:33.802631  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:33.802640  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:33.802643  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:33.806300  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:33.807219  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:33.807238  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:33.807245  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:33.807250  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:33.810578  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:34.303606  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:34.303632  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:34.303640  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:34.303644  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:34.308029  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:34.309132  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:34.309159  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:34.309172  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:34.309180  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:34.313056  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:34.803231  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:34.803261  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:34.803273  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:34.803278  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:34.806971  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:34.807591  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:34.807609  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:34.807617  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:34.807621  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:34.810457  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:34.810998  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:35.303350  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:35.303377  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:35.303386  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:35.303390  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:35.307557  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:35.310343  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:35.310361  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:35.310370  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:35.310374  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:35.314047  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:35.803318  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:35.803343  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:35.803352  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:35.803355  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:35.806663  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:35.807415  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:35.807435  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:35.807451  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:35.807460  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:35.810577  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:36.303513  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:36.303545  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:36.303577  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:36.303584  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:36.307367  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:36.308070  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:36.308089  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:36.308100  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:36.308106  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:36.312298  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:36.803266  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:36.803291  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:36.803299  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:36.803303  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:36.807158  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:36.807888  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:36.807906  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:36.807913  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:36.807918  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:36.811315  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:36.811752  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:37.303051  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:37.303079  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:37.303090  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:37.303094  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:37.307312  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:37.308243  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:37.308264  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:37.308275  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:37.308282  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:37.311883  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:37.802545  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:37.802572  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:37.802581  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:37.802585  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:37.805697  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:37.806592  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:37.806612  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:37.806622  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:37.806627  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:37.809149  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:38.302574  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:38.302602  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:38.302615  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:38.302621  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:38.306531  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:38.307159  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:38.307178  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:38.307189  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:38.307193  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:38.310496  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:38.803467  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:38.803495  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:38.803504  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:38.803509  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:38.807052  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:38.807927  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:38.807944  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:38.807951  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:38.807956  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:38.810712  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:39.302764  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:39.302790  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:39.302801  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:39.302805  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:39.306507  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:39.307614  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:39.307633  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:39.307641  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:39.307645  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:39.311327  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:39.311854  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:39.803193  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:39.803216  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:39.803225  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:39.803229  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:39.806519  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:39.807496  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:39.807515  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:39.807525  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:39.807532  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:39.810711  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:40.303599  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:40.303624  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:40.303633  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:40.303637  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:40.307414  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:40.308201  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:40.308227  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:40.308236  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:40.308242  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:40.313547  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:25:40.803513  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:40.803535  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:40.803543  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:40.803548  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:40.806979  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:40.807738  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:40.807753  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:40.807761  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:40.807765  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:40.810649  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:41.303319  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:41.303343  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:41.303351  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:41.303355  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:41.307376  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:41.307943  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:41.307958  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:41.307965  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:41.307970  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:41.311161  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:41.803525  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:41.803549  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:41.803556  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:41.803559  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:41.806564  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:41.807431  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:41.807453  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:41.807464  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:41.807470  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:41.810527  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:41.811143  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:42.303619  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:42.303650  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:42.303662  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:42.303670  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:42.307838  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:42.308516  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:42.308536  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:42.308544  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:42.308550  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:42.312418  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:42.803505  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:42.803530  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:42.803540  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:42.803543  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:42.807116  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:42.808027  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:42.808044  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:42.808051  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:42.808055  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:42.810713  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:43.303632  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:43.303654  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:43.303664  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:43.303668  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:43.307247  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:43.307986  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:43.308002  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:43.308009  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:43.308013  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:43.310824  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:43.802592  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:43.802620  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:43.802628  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:43.802632  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:43.806238  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:43.807037  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:43.807059  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:43.807072  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:43.807076  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:43.809889  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:44.302994  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:44.303018  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:44.303026  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:44.303030  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:44.306644  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:44.307454  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:44.307470  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:44.307478  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:44.307482  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:44.311122  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:44.311762  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:44.803237  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:44.803267  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:44.803279  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:44.803286  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:44.807350  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:44.808020  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:44.808038  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:44.808045  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:44.808051  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:44.810846  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:45.302711  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:45.302735  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:45.302744  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:45.302748  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:45.306615  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:45.307478  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:45.307497  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:45.307508  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:45.307514  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:45.310453  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:45.803401  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:45.803428  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:45.803439  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:45.803444  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:45.807308  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:45.808014  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:45.808029  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:45.808036  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:45.808039  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:45.810822  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:46.302557  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:46.302584  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:46.302597  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:46.302601  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:46.306132  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:46.306862  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:46.306879  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:46.306888  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:46.306894  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:46.310611  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:46.803427  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:46.803455  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:46.803467  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:46.803474  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:46.807174  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:46.807896  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:46.807913  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:46.807921  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:46.807924  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:46.810938  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:46.811392  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	[... ~12s of near-identical polling elided: the paired GETs of pod "coredns-7db6d8ff4d-nk4lf" and node "ha-735960" repeat at ~500ms intervals from 12:25:47.302 until 12:25:59.810, every response 200 OK in 2-8 milliseconds, with pod_ready.go:102 logging status "Ready":"False" at roughly two-second intervals until the final pair ...]
	I0701 12:25:59.810618  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"True"
	I0701 12:25:59.810639  653531 pod_ready.go:81] duration metric: took 48.008262746s for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
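The 48-second wait above is a plain readiness poll: fetch the pod, check its Ready condition, sleep ~500ms, repeat, with a parallel fetch of the hosting node. Below is a minimal client-go sketch of that pattern, assuming the standard Kubernetes client libraries; it is an illustration, not minikube's pod_ready implementation. The namespace, pod name, and 6m0s timeout are taken from the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod every 500ms, as the log above does, until its
// PodReady condition reports True or the timeout expires.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling through transient errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "coredns-7db6d8ff4d-nk4lf", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}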
	I0701 12:25:59.810648  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.810702  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p4rtz
	I0701 12:25:59.810709  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.810716  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.810720  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.813396  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.813957  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:59.813972  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.813979  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.813982  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.816606  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.816994  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace has status "Ready":"True"
	I0701 12:25:59.817012  653531 pod_ready.go:81] duration metric: took 6.357752ms for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.817021  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.817069  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960
	I0701 12:25:59.817076  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.817084  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.817090  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.819509  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.819970  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:59.819984  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.819991  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.819995  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.822382  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.822919  653531 pod_ready.go:92] pod "etcd-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:25:59.822941  653531 pod_ready.go:81] duration metric: took 5.912537ms for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.822951  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.823013  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m02
	I0701 12:25:59.823021  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.823028  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.823032  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.825241  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.825771  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:59.825785  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.825791  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.825795  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.828111  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.828706  653531 pod_ready.go:92] pod "etcd-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:25:59.828725  653531 pod_ready.go:81] duration metric: took 5.760203ms for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.828740  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.828804  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:25:59.828813  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.828820  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.828827  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.832068  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:59.832863  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:25:59.832878  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.832885  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.832892  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.835452  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.835992  653531 pod_ready.go:97] node "ha-735960-m03" hosting pod "etcd-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:25:59.836024  653531 pod_ready.go:81] duration metric: took 7.273472ms for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	E0701 12:25:59.836031  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m03" hosting pod "etcd-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
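The pod_ready.go:97 and WaitExtra lines show the check short-circuiting for pods whose hosting node is not Ready: ha-735960-m03 reports "Ready":"Unknown", so its static pods are skipped rather than waited on. The following is a small sketch of that node gate using the usual core/v1 types; the logic is inferred from the log, not copied from minikube's code.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeReady mirrors the gate implied by pod_ready.go:97: only wait on a pod
// if its hosting node reports a Ready condition of True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// ha-735960-m03 above reports "Ready":"Unknown", so the gate fails and
	// the pod check is skipped instead of blocking for up to 6 minutes.
	m03 := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
		{Type: corev1.NodeReady, Status: corev1.ConditionUnknown},
	}}}
	fmt.Println(nodeReady(m03)) // false
}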
	I0701 12:25:59.836046  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:00.003492  653531 request.go:629] Waited for 167.376104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960
	I0701 12:26:00.003566  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960
	I0701 12:26:00.003574  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:00.003585  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:00.003603  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:00.011681  653531 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0701 12:26:00.203578  653531 request.go:629] Waited for 191.210292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:00.203641  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:00.203647  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:00.203654  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:00.203664  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:00.207391  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:00.207910  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:00.207934  653531 pod_ready.go:81] duration metric: took 371.877302ms for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
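The "Waited ... due to client-side throttling, not priority and fairness" lines interleaved from here on are emitted by client-go's token-bucket rate limiter, not by the API server: once requests arrive faster than the configured QPS with the Burst headroom spent, the client delays and logs the wait. As a hedged illustration (the QPS and Burst values below are arbitrary, not minikube's settings), the limiter is tuned on rest.Config before building the clientset:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Illustrative values only: beyond Burst in-flight requests, anything
	// above QPS per second is delayed, producing the request.go:629
	// "client-side throttling" messages seen in the log.
	cfg.QPS = 5
	cfg.Burst = 10
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}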
	I0701 12:26:00.207946  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:00.403020  653531 request.go:629] Waited for 194.98389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m02
	I0701 12:26:00.403111  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m02
	I0701 12:26:00.403119  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:00.403141  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:00.403168  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:00.406515  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:00.603670  653531 request.go:629] Waited for 196.408497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:00.603756  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:00.603766  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:00.603776  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:00.603787  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:00.607641  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:00.608254  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:00.608279  653531 pod_ready.go:81] duration metric: took 400.3268ms for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:00.608290  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:00.803335  653531 request.go:629] Waited for 194.970976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:00.803416  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:00.803423  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:00.803432  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:00.803437  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:00.806887  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.003849  653531 request.go:629] Waited for 196.371058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:01.003924  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:01.003931  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:01.003942  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:01.003947  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:01.007167  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.007625  653531 pod_ready.go:97] node "ha-735960-m03" hosting pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:01.007649  653531 pod_ready.go:81] duration metric: took 399.353356ms for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	E0701 12:26:01.007659  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m03" hosting pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:01.007667  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:01.203752  653531 request.go:629] Waited for 195.992128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960
	I0701 12:26:01.203816  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960
	I0701 12:26:01.203821  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:01.203829  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:01.203835  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:01.207391  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.403364  653531 request.go:629] Waited for 195.371527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:01.403446  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:01.403452  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:01.403460  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:01.403464  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:01.406768  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.407262  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:01.407282  653531 pod_ready.go:81] duration metric: took 399.606397ms for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:01.407291  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:01.603806  653531 request.go:629] Waited for 196.426419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m02
	I0701 12:26:01.603868  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m02
	I0701 12:26:01.603877  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:01.603885  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:01.603889  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:01.607133  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.803115  653531 request.go:629] Waited for 195.29931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:01.803195  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:01.803202  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:01.803213  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:01.803220  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:01.806296  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.806997  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:01.807020  653531 pod_ready.go:81] duration metric: took 399.723075ms for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:01.807032  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:02.003077  653531 request.go:629] Waited for 195.935538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m03
	I0701 12:26:02.003184  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m03
	I0701 12:26:02.003199  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:02.003212  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:02.003220  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:02.008458  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:26:02.203469  653531 request.go:629] Waited for 194.368942ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:02.203529  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:02.203535  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:02.203542  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:02.203546  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:02.207148  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:02.207764  653531 pod_ready.go:97] node "ha-735960-m03" hosting pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:02.207791  653531 pod_ready.go:81] duration metric: took 400.749537ms for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	E0701 12:26:02.207804  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m03" hosting pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:02.207816  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:02.403791  653531 request.go:629] Waited for 195.887211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25ssf
	I0701 12:26:02.403858  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25ssf
	I0701 12:26:02.403864  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:02.403874  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:02.403879  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:02.407843  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:02.603935  653531 request.go:629] Waited for 195.282891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:26:02.604003  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:26:02.604008  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:02.604017  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:02.604024  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:02.607222  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:02.607681  653531 pod_ready.go:97] node "ha-735960-m04" hosting pod "kube-proxy-25ssf" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m04" has status "Ready":"Unknown"
	I0701 12:26:02.607701  653531 pod_ready.go:81] duration metric: took 399.872451ms for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	E0701 12:26:02.607710  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m04" hosting pod "kube-proxy-25ssf" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m04" has status "Ready":"Unknown"
	I0701 12:26:02.607715  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:02.803135  653531 request.go:629] Waited for 195.335441ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:26:02.803208  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:26:02.803214  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:02.803221  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:02.803229  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:02.806089  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:03.004065  653531 request.go:629] Waited for 197.373789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:03.004141  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:03.004150  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:03.004158  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:03.004174  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:03.007294  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:03.007921  653531 pod_ready.go:97] node "ha-735960-m03" hosting pod "kube-proxy-776rt" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:03.007945  653531 pod_ready.go:81] duration metric: took 400.223567ms for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
	E0701 12:26:03.007955  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m03" hosting pod "kube-proxy-776rt" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:03.007961  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:03.204042  653531 request.go:629] Waited for 195.997795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:26:03.204129  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:26:03.204135  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:03.204143  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:03.204151  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:03.207989  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:03.404038  653531 request.go:629] Waited for 195.374708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:03.404108  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:03.404113  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:03.404122  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:03.404127  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:03.407364  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:03.407859  653531 pod_ready.go:92] pod "kube-proxy-b6knb" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:03.407879  653531 pod_ready.go:81] duration metric: took 399.911763ms for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:03.407889  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:03.603040  653531 request.go:629] Waited for 195.068023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:26:03.603123  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:26:03.603128  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:03.603137  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:03.603141  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:03.606547  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:03.803798  653531 request.go:629] Waited for 196.387613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:03.803870  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:03.803875  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:03.803883  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:03.803888  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:03.807381  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:03.807877  653531 pod_ready.go:92] pod "kube-proxy-lphzn" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:03.807898  653531 pod_ready.go:81] duration metric: took 400.000751ms for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:03.807907  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:04.004031  653531 request.go:629] Waited for 196.031388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:26:04.004089  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:26:04.004095  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:04.004107  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:04.004115  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:04.007598  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:04.204058  653531 request.go:629] Waited for 195.850938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:04.204148  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:04.204158  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:04.204172  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:04.204181  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:04.207457  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:04.208086  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:04.208102  653531 pod_ready.go:81] duration metric: took 400.189366ms for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:04.208112  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:04.403245  653531 request.go:629] Waited for 195.048743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:26:04.403318  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:26:04.403323  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:04.403331  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:04.403335  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:04.406662  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:04.603781  653531 request.go:629] Waited for 196.396031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:04.603851  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:04.603858  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:04.603868  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:04.603872  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:04.607382  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:04.607837  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:04.607857  653531 pod_ready.go:81] duration metric: took 399.737176ms for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:04.607869  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:04.803931  653531 request.go:629] Waited for 195.967281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:26:04.804004  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:26:04.804010  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:04.804018  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:04.804025  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:04.807572  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:05.003764  653531 request.go:629] Waited for 195.365798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:05.003830  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:05.003836  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:05.003844  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:05.003852  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:05.006888  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:05.007360  653531 pod_ready.go:97] node "ha-735960-m03" hosting pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:05.007379  653531 pod_ready.go:81] duration metric: took 399.502183ms for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	E0701 12:26:05.007388  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m03" hosting pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:05.007396  653531 pod_ready.go:38] duration metric: took 53.305072048s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 12:26:05.007419  653531 api_server.go:52] waiting for apiserver process to appear ...
	I0701 12:26:05.007525  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 12:26:05.023687  653531 logs.go:276] 2 containers: [f615f587cb12 c36c1d459356]
	I0701 12:26:05.023779  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 12:26:05.041137  653531 logs.go:276] 2 containers: [68c63c4abd01 dff0f4abea41]
	I0701 12:26:05.041235  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 12:26:05.059910  653531 logs.go:276] 0 containers: []
	W0701 12:26:05.059939  653531 logs.go:278] No container was found matching "coredns"
	I0701 12:26:05.060005  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 12:26:05.076858  653531 logs.go:276] 2 containers: [279483668a9c 58811626a0de]
	I0701 12:26:05.076953  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 12:26:05.091973  653531 logs.go:276] 2 containers: [156169e4ac3c 2885f7cf6f93]
	I0701 12:26:05.092072  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 12:26:05.109350  653531 logs.go:276] 2 containers: [a72e102b5bf7 a1160a455902]
	I0701 12:26:05.109445  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 12:26:05.126947  653531 logs.go:276] 2 containers: [c8184f4bc096 8c3a5ac0cf85]
	I0701 12:26:05.127013  653531 logs.go:123] Gathering logs for container status ...
	I0701 12:26:05.127032  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 12:26:05.172758  653531 logs.go:123] Gathering logs for describe nodes ...
	I0701 12:26:05.172800  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 12:26:05.530082  653531 logs.go:123] Gathering logs for kube-apiserver [f615f587cb12] ...
	I0701 12:26:05.530114  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f615f587cb12"
	I0701 12:26:05.563833  653531 logs.go:123] Gathering logs for kube-apiserver [c36c1d459356] ...
	I0701 12:26:05.563866  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36c1d459356"
	I0701 12:26:05.633259  653531 logs.go:123] Gathering logs for etcd [dff0f4abea41] ...
	I0701 12:26:05.633305  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dff0f4abea41"
	I0701 12:26:05.672146  653531 logs.go:123] Gathering logs for kube-scheduler [58811626a0de] ...
	I0701 12:26:05.672187  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58811626a0de"
	I0701 12:26:05.693508  653531 logs.go:123] Gathering logs for kube-proxy [2885f7cf6f93] ...
	I0701 12:26:05.693553  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2885f7cf6f93"
	I0701 12:26:05.717857  653531 logs.go:123] Gathering logs for Docker ...
	I0701 12:26:05.717889  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 12:26:05.766696  653531 logs.go:123] Gathering logs for dmesg ...
	I0701 12:26:05.766736  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 12:26:05.781553  653531 logs.go:123] Gathering logs for kube-proxy [156169e4ac3c] ...
	I0701 12:26:05.781587  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 156169e4ac3c"
	I0701 12:26:05.807724  653531 logs.go:123] Gathering logs for kindnet [8c3a5ac0cf85] ...
	I0701 12:26:05.807758  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a5ac0cf85"
	I0701 12:26:05.830042  653531 logs.go:123] Gathering logs for etcd [68c63c4abd01] ...
	I0701 12:26:05.830072  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68c63c4abd01"
	I0701 12:26:05.862525  653531 logs.go:123] Gathering logs for kube-controller-manager [a72e102b5bf7] ...
	I0701 12:26:05.862568  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a72e102b5bf7"
	I0701 12:26:05.901329  653531 logs.go:123] Gathering logs for kube-controller-manager [a1160a455902] ...
	I0701 12:26:05.901370  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1160a455902"
	I0701 12:26:05.942097  653531 logs.go:123] Gathering logs for kindnet [c8184f4bc096] ...
	I0701 12:26:05.942139  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8184f4bc096"
	I0701 12:26:05.964792  653531 logs.go:123] Gathering logs for kubelet ...
	I0701 12:26:05.964829  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 12:26:06.027347  653531 logs.go:123] Gathering logs for kube-scheduler [279483668a9c] ...
	I0701 12:26:06.027394  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483668a9c"
	I0701 12:26:08.550396  653531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 12:26:08.565837  653531 api_server.go:72] duration metric: took 1m18.553699317s to wait for apiserver process to appear ...
	I0701 12:26:08.565866  653531 api_server.go:88] waiting for apiserver healthz status ...
	I0701 12:26:08.565941  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 12:26:08.584274  653531 logs.go:276] 2 containers: [f615f587cb12 c36c1d459356]
	I0701 12:26:08.584349  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 12:26:08.601551  653531 logs.go:276] 2 containers: [68c63c4abd01 dff0f4abea41]
	I0701 12:26:08.601633  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 12:26:08.619657  653531 logs.go:276] 0 containers: []
	W0701 12:26:08.619687  653531 logs.go:278] No container was found matching "coredns"
	I0701 12:26:08.619744  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 12:26:08.637393  653531 logs.go:276] 2 containers: [279483668a9c 58811626a0de]
	I0701 12:26:08.637473  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 12:26:08.662222  653531 logs.go:276] 2 containers: [156169e4ac3c 2885f7cf6f93]
	I0701 12:26:08.662307  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 12:26:08.678542  653531 logs.go:276] 2 containers: [a72e102b5bf7 a1160a455902]
	I0701 12:26:08.678649  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 12:26:08.698914  653531 logs.go:276] 2 containers: [c8184f4bc096 8c3a5ac0cf85]
	I0701 12:26:08.698956  653531 logs.go:123] Gathering logs for kube-scheduler [58811626a0de] ...
	I0701 12:26:08.698968  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58811626a0de"
	I0701 12:26:08.722744  653531 logs.go:123] Gathering logs for kube-controller-manager [a72e102b5bf7] ...
	I0701 12:26:08.722780  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a72e102b5bf7"
	I0701 12:26:08.767782  653531 logs.go:123] Gathering logs for kindnet [8c3a5ac0cf85] ...
	I0701 12:26:08.767825  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a5ac0cf85"
	I0701 12:26:08.792700  653531 logs.go:123] Gathering logs for Docker ...
	I0701 12:26:08.792731  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 12:26:08.841902  653531 logs.go:123] Gathering logs for container status ...
	I0701 12:26:08.841943  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 12:26:08.885531  653531 logs.go:123] Gathering logs for kubelet ...
	I0701 12:26:08.885563  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 12:26:08.940130  653531 logs.go:123] Gathering logs for etcd [68c63c4abd01] ...
	I0701 12:26:08.940179  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68c63c4abd01"
	I0701 12:26:08.973841  653531 logs.go:123] Gathering logs for etcd [dff0f4abea41] ...
	I0701 12:26:08.973883  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dff0f4abea41"
	I0701 12:26:09.008785  653531 logs.go:123] Gathering logs for kube-apiserver [f615f587cb12] ...
	I0701 12:26:09.008824  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f615f587cb12"
	I0701 12:26:09.040512  653531 logs.go:123] Gathering logs for kube-apiserver [c36c1d459356] ...
	I0701 12:26:09.040568  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36c1d459356"
	I0701 12:26:09.135818  653531 logs.go:123] Gathering logs for kube-scheduler [279483668a9c] ...
	I0701 12:26:09.135876  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483668a9c"
	I0701 12:26:09.158758  653531 logs.go:123] Gathering logs for describe nodes ...
	I0701 12:26:09.158802  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 12:26:09.415637  653531 logs.go:123] Gathering logs for kube-proxy [2885f7cf6f93] ...
	I0701 12:26:09.415685  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2885f7cf6f93"
	I0701 12:26:09.438064  653531 logs.go:123] Gathering logs for kindnet [c8184f4bc096] ...
	I0701 12:26:09.438104  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8184f4bc096"
	I0701 12:26:09.463612  653531 logs.go:123] Gathering logs for dmesg ...
	I0701 12:26:09.463666  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 12:26:09.477906  653531 logs.go:123] Gathering logs for kube-proxy [156169e4ac3c] ...
	I0701 12:26:09.477936  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 156169e4ac3c"
	I0701 12:26:09.501662  653531 logs.go:123] Gathering logs for kube-controller-manager [a1160a455902] ...
	I0701 12:26:09.501704  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1160a455902"
	I0701 12:26:12.049246  653531 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0701 12:26:12.055739  653531 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0701 12:26:12.055824  653531 round_trippers.go:463] GET https://192.168.39.16:8443/version
	I0701 12:26:12.055829  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:12.055837  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:12.055841  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:12.056892  653531 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0701 12:26:12.057034  653531 api_server.go:141] control plane version: v1.30.2
	I0701 12:26:12.057055  653531 api_server.go:131] duration metric: took 3.491183076s to wait for apiserver health ...
	I0701 12:26:12.057064  653531 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 12:26:12.057160  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 12:26:12.074309  653531 logs.go:276] 2 containers: [f615f587cb12 c36c1d459356]
	I0701 12:26:12.074405  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 12:26:12.100040  653531 logs.go:276] 2 containers: [68c63c4abd01 dff0f4abea41]
	I0701 12:26:12.100116  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 12:26:12.119321  653531 logs.go:276] 0 containers: []
	W0701 12:26:12.119352  653531 logs.go:278] No container was found matching "coredns"
	I0701 12:26:12.119406  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 12:26:12.137547  653531 logs.go:276] 2 containers: [279483668a9c 58811626a0de]
	I0701 12:26:12.137660  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 12:26:12.157321  653531 logs.go:276] 2 containers: [156169e4ac3c 2885f7cf6f93]
	I0701 12:26:12.157417  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 12:26:12.182117  653531 logs.go:276] 2 containers: [a72e102b5bf7 a1160a455902]
	I0701 12:26:12.182204  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 12:26:12.204201  653531 logs.go:276] 2 containers: [c8184f4bc096 8c3a5ac0cf85]
	I0701 12:26:12.204247  653531 logs.go:123] Gathering logs for kube-proxy [2885f7cf6f93] ...
	I0701 12:26:12.204260  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2885f7cf6f93"
	I0701 12:26:12.228173  653531 logs.go:123] Gathering logs for kube-controller-manager [a72e102b5bf7] ...
	I0701 12:26:12.228206  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a72e102b5bf7"
	I0701 12:26:12.267264  653531 logs.go:123] Gathering logs for kindnet [c8184f4bc096] ...
	I0701 12:26:12.267309  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8184f4bc096"
	I0701 12:26:12.294504  653531 logs.go:123] Gathering logs for Docker ...
	I0701 12:26:12.294535  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 12:26:12.344610  653531 logs.go:123] Gathering logs for describe nodes ...
	I0701 12:26:12.344649  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 12:26:12.593887  653531 logs.go:123] Gathering logs for kube-apiserver [c36c1d459356] ...
	I0701 12:26:12.593927  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36c1d459356"
	I0701 12:26:12.665033  653531 logs.go:123] Gathering logs for kube-proxy [156169e4ac3c] ...
	I0701 12:26:12.665082  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 156169e4ac3c"
	I0701 12:26:12.687103  653531 logs.go:123] Gathering logs for container status ...
	I0701 12:26:12.687142  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 12:26:12.735851  653531 logs.go:123] Gathering logs for kubelet ...
	I0701 12:26:12.735886  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 12:26:12.793127  653531 logs.go:123] Gathering logs for kube-apiserver [f615f587cb12] ...
	I0701 12:26:12.793168  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f615f587cb12"
	I0701 12:26:12.823004  653531 logs.go:123] Gathering logs for kindnet [8c3a5ac0cf85] ...
	I0701 12:26:12.823037  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a5ac0cf85"
	I0701 12:26:12.862610  653531 logs.go:123] Gathering logs for kube-scheduler [279483668a9c] ...
	I0701 12:26:12.862650  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483668a9c"
	I0701 12:26:12.883651  653531 logs.go:123] Gathering logs for kube-scheduler [58811626a0de] ...
	I0701 12:26:12.883685  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58811626a0de"
	I0701 12:26:12.905351  653531 logs.go:123] Gathering logs for kube-controller-manager [a1160a455902] ...
	I0701 12:26:12.905388  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1160a455902"
	I0701 12:26:12.938388  653531 logs.go:123] Gathering logs for dmesg ...
	I0701 12:26:12.938427  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 12:26:12.955609  653531 logs.go:123] Gathering logs for etcd [68c63c4abd01] ...
	I0701 12:26:12.955647  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68c63c4abd01"
	I0701 12:26:12.987593  653531 logs.go:123] Gathering logs for etcd [dff0f4abea41] ...
	I0701 12:26:12.987626  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dff0f4abea41"
	I0701 12:26:15.520590  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:26:15.520616  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:15.520625  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:15.520628  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:15.528299  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:26:15.535569  653531 system_pods.go:59] 26 kube-system pods found
	I0701 12:26:15.535603  653531 system_pods.go:61] "coredns-7db6d8ff4d-nk4lf" [c03dd635-a82d-4f18-bd72-ec575f91867e] Running
	I0701 12:26:15.535608  653531 system_pods.go:61] "coredns-7db6d8ff4d-p4rtz" [267efba7-bf34-48d5-ab15-5bda45ff2f4f] Running
	I0701 12:26:15.535613  653531 system_pods.go:61] "etcd-ha-735960" [4b98745c-292f-42b5-977c-69c50fd241f1] Running
	I0701 12:26:15.535617  653531 system_pods.go:61] "etcd-ha-735960-m02" [fed8cdfa-8428-47e0-84ef-05297ad232f8] Running
	I0701 12:26:15.535622  653531 system_pods.go:61] "etcd-ha-735960-m03" [50b07bc3-ff6b-487d-8654-901d96892868] Running
	I0701 12:26:15.535625  653531 system_pods.go:61] "kindnet-2424m" [aa18d5dd-f6eb-4f04-a61e-b0b257e214af] Running
	I0701 12:26:15.535628  653531 system_pods.go:61] "kindnet-6gx8s" [7f46a773-a075-476c-9e54-89f125b4b57a] Running
	I0701 12:26:15.535631  653531 system_pods.go:61] "kindnet-7f6hm" [a8c302b4-1163-4d4f-bfe3-4fd3b5d23cf0] Running
	I0701 12:26:15.535633  653531 system_pods.go:61] "kindnet-bztzv" [7afa0e45-3d10-40bc-b422-7005a3ca9d3a] Running
	I0701 12:26:15.535636  653531 system_pods.go:61] "kube-apiserver-ha-735960" [ad041aaa-465a-4d8a-a8dc-b7665e1d587d] Running
	I0701 12:26:15.535640  653531 system_pods.go:61] "kube-apiserver-ha-735960-m02" [ba28f48e-1c18-47e3-ab11-a9b5588c5c32] Running
	I0701 12:26:15.535642  653531 system_pods.go:61] "kube-apiserver-ha-735960-m03" [baafa3bf-78ee-4269-9591-b0440927e055] Running
	I0701 12:26:15.535646  653531 system_pods.go:61] "kube-controller-manager-ha-735960" [3f0f0cf5-329d-47bc-b922-7583902e2607] Running
	I0701 12:26:15.535649  653531 system_pods.go:61] "kube-controller-manager-ha-735960-m02" [258fde18-ac5c-4446-842b-9465529b154c] Running
	I0701 12:26:15.535652  653531 system_pods.go:61] "kube-controller-manager-ha-735960-m03" [79acc56b-a9e0-4d4b-bc64-1a3a36ddf051] Running
	I0701 12:26:15.535655  653531 system_pods.go:61] "kube-proxy-25ssf" [11f0dc23-ab9d-4d39-988d-4c44dfde86cd] Running
	I0701 12:26:15.535658  653531 system_pods.go:61] "kube-proxy-776rt" [5666dac7-924e-4429-bd1d-a1a5647cc611] Running
	I0701 12:26:15.535661  653531 system_pods.go:61] "kube-proxy-b6knb" [eb36e930-5799-4ff7-821a-ccb22303cd1b] Running
	I0701 12:26:15.535664  653531 system_pods.go:61] "kube-proxy-lphzn" [0761a7a6-740e-4cde-9ab5-e02e8d417907] Running
	I0701 12:26:15.535667  653531 system_pods.go:61] "kube-scheduler-ha-735960" [c624cf42-a7d6-4aaf-859d-1aeaf29f9acb] Running
	I0701 12:26:15.535670  653531 system_pods.go:61] "kube-scheduler-ha-735960-m02" [7de78af7-2d79-46dc-bd34-f221d79fde06] Running
	I0701 12:26:15.535673  653531 system_pods.go:61] "kube-scheduler-ha-735960-m03" [9f9a2030-9332-44af-b8dc-3b4609e53f91] Running
	I0701 12:26:15.535676  653531 system_pods.go:61] "kube-vip-ha-735960" [4299679a-c145-4f4f-8ec6-3cd468b98ef1] Running
	I0701 12:26:15.535679  653531 system_pods.go:61] "kube-vip-ha-735960-m02" [1c9b13e1-515c-43c0-8d99-5ad1c1807727] Running
	I0701 12:26:15.535684  653531 system_pods.go:61] "kube-vip-ha-735960-m03" [7069ea7c-5461-4fe6-a969-97fe33396ebb] Running
	I0701 12:26:15.535688  653531 system_pods.go:61] "storage-provisioner" [f5c4f7f9-d648-4019-a5ea-6ce59f6c5663] Running
	I0701 12:26:15.535693  653531 system_pods.go:74] duration metric: took 3.47862483s to wait for pod list to return data ...
	I0701 12:26:15.535701  653531 default_sa.go:34] waiting for default service account to be created ...
	I0701 12:26:15.535798  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/default/serviceaccounts
	I0701 12:26:15.535809  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:15.535816  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:15.535820  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:15.539198  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:15.539410  653531 default_sa.go:45] found service account: "default"
	I0701 12:26:15.539425  653531 default_sa.go:55] duration metric: took 3.71568ms for default service account to be created ...
	I0701 12:26:15.539433  653531 system_pods.go:116] waiting for k8s-apps to be running ...
	I0701 12:26:15.539483  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:26:15.539490  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:15.539497  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:15.539503  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:15.547242  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:26:15.553992  653531 system_pods.go:86] 26 kube-system pods found
	I0701 12:26:15.554026  653531 system_pods.go:89] "coredns-7db6d8ff4d-nk4lf" [c03dd635-a82d-4f18-bd72-ec575f91867e] Running
	I0701 12:26:15.554034  653531 system_pods.go:89] "coredns-7db6d8ff4d-p4rtz" [267efba7-bf34-48d5-ab15-5bda45ff2f4f] Running
	I0701 12:26:15.554040  653531 system_pods.go:89] "etcd-ha-735960" [4b98745c-292f-42b5-977c-69c50fd241f1] Running
	I0701 12:26:15.554046  653531 system_pods.go:89] "etcd-ha-735960-m02" [fed8cdfa-8428-47e0-84ef-05297ad232f8] Running
	I0701 12:26:15.554050  653531 system_pods.go:89] "etcd-ha-735960-m03" [50b07bc3-ff6b-487d-8654-901d96892868] Running
	I0701 12:26:15.554056  653531 system_pods.go:89] "kindnet-2424m" [aa18d5dd-f6eb-4f04-a61e-b0b257e214af] Running
	I0701 12:26:15.554062  653531 system_pods.go:89] "kindnet-6gx8s" [7f46a773-a075-476c-9e54-89f125b4b57a] Running
	I0701 12:26:15.554069  653531 system_pods.go:89] "kindnet-7f6hm" [a8c302b4-1163-4d4f-bfe3-4fd3b5d23cf0] Running
	I0701 12:26:15.554075  653531 system_pods.go:89] "kindnet-bztzv" [7afa0e45-3d10-40bc-b422-7005a3ca9d3a] Running
	I0701 12:26:15.554081  653531 system_pods.go:89] "kube-apiserver-ha-735960" [ad041aaa-465a-4d8a-a8dc-b7665e1d587d] Running
	I0701 12:26:15.554088  653531 system_pods.go:89] "kube-apiserver-ha-735960-m02" [ba28f48e-1c18-47e3-ab11-a9b5588c5c32] Running
	I0701 12:26:15.554099  653531 system_pods.go:89] "kube-apiserver-ha-735960-m03" [baafa3bf-78ee-4269-9591-b0440927e055] Running
	I0701 12:26:15.554107  653531 system_pods.go:89] "kube-controller-manager-ha-735960" [3f0f0cf5-329d-47bc-b922-7583902e2607] Running
	I0701 12:26:15.554115  653531 system_pods.go:89] "kube-controller-manager-ha-735960-m02" [258fde18-ac5c-4446-842b-9465529b154c] Running
	I0701 12:26:15.554123  653531 system_pods.go:89] "kube-controller-manager-ha-735960-m03" [79acc56b-a9e0-4d4b-bc64-1a3a36ddf051] Running
	I0701 12:26:15.554131  653531 system_pods.go:89] "kube-proxy-25ssf" [11f0dc23-ab9d-4d39-988d-4c44dfde86cd] Running
	I0701 12:26:15.554140  653531 system_pods.go:89] "kube-proxy-776rt" [5666dac7-924e-4429-bd1d-a1a5647cc611] Running
	I0701 12:26:15.554148  653531 system_pods.go:89] "kube-proxy-b6knb" [eb36e930-5799-4ff7-821a-ccb22303cd1b] Running
	I0701 12:26:15.554163  653531 system_pods.go:89] "kube-proxy-lphzn" [0761a7a6-740e-4cde-9ab5-e02e8d417907] Running
	I0701 12:26:15.554170  653531 system_pods.go:89] "kube-scheduler-ha-735960" [c624cf42-a7d6-4aaf-859d-1aeaf29f9acb] Running
	I0701 12:26:15.554176  653531 system_pods.go:89] "kube-scheduler-ha-735960-m02" [7de78af7-2d79-46dc-bd34-f221d79fde06] Running
	I0701 12:26:15.554183  653531 system_pods.go:89] "kube-scheduler-ha-735960-m03" [9f9a2030-9332-44af-b8dc-3b4609e53f91] Running
	I0701 12:26:15.554192  653531 system_pods.go:89] "kube-vip-ha-735960" [4299679a-c145-4f4f-8ec6-3cd468b98ef1] Running
	I0701 12:26:15.554199  653531 system_pods.go:89] "kube-vip-ha-735960-m02" [1c9b13e1-515c-43c0-8d99-5ad1c1807727] Running
	I0701 12:26:15.554207  653531 system_pods.go:89] "kube-vip-ha-735960-m03" [7069ea7c-5461-4fe6-a969-97fe33396ebb] Running
	I0701 12:26:15.554216  653531 system_pods.go:89] "storage-provisioner" [f5c4f7f9-d648-4019-a5ea-6ce59f6c5663] Running
	I0701 12:26:15.554229  653531 system_pods.go:126] duration metric: took 14.787055ms to wait for k8s-apps to be running ...
	I0701 12:26:15.554241  653531 system_svc.go:44] waiting for kubelet service to be running ....
	I0701 12:26:15.554296  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:26:15.567890  653531 system_svc.go:56] duration metric: took 13.638054ms WaitForService to wait for kubelet
	I0701 12:26:15.567925  653531 kubeadm.go:576] duration metric: took 1m25.555790211s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:26:15.567951  653531 node_conditions.go:102] verifying NodePressure condition ...
	I0701 12:26:15.568047  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes
	I0701 12:26:15.568057  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:15.568067  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:15.568074  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:15.575311  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:26:15.577277  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:26:15.577310  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:26:15.577328  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:26:15.577334  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:26:15.577339  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:26:15.577343  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:26:15.577348  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:26:15.577352  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:26:15.577358  653531 node_conditions.go:105] duration metric: took 9.401356ms to run NodePressure ...
	I0701 12:26:15.577372  653531 start.go:240] waiting for startup goroutines ...
	I0701 12:26:15.577418  653531 start.go:254] writing updated cluster config ...
	I0701 12:26:15.579876  653531 out.go:177] 
	I0701 12:26:15.581466  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:26:15.581562  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:26:15.583519  653531 out.go:177] * Starting "ha-735960-m03" control-plane node in "ha-735960" cluster
	I0701 12:26:15.584707  653531 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:26:15.584732  653531 cache.go:56] Caching tarball of preloaded images
	I0701 12:26:15.584831  653531 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:26:15.584841  653531 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:26:15.584932  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:26:15.585716  653531 start.go:360] acquireMachinesLock for ha-735960-m03: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:26:15.585768  653531 start.go:364] duration metric: took 28.47µs to acquireMachinesLock for "ha-735960-m03"
	I0701 12:26:15.585785  653531 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:26:15.585798  653531 fix.go:54] fixHost starting: m03
	I0701 12:26:15.586107  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:26:15.586143  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:26:15.603500  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43455
	I0701 12:26:15.603962  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:26:15.604555  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:26:15.604579  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:26:15.604930  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:26:15.605195  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:15.605384  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetState
	I0701 12:26:15.607018  653531 fix.go:112] recreateIfNeeded on ha-735960-m03: state=Stopped err=<nil>
	I0701 12:26:15.607042  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	W0701 12:26:15.607213  653531 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:26:15.609349  653531 out.go:177] * Restarting existing kvm2 VM for "ha-735960-m03" ...
	I0701 12:26:15.610714  653531 main.go:141] libmachine: (ha-735960-m03) Calling .Start
	I0701 12:26:15.610921  653531 main.go:141] libmachine: (ha-735960-m03) Ensuring networks are active...
	I0701 12:26:15.611706  653531 main.go:141] libmachine: (ha-735960-m03) Ensuring network default is active
	I0701 12:26:15.612087  653531 main.go:141] libmachine: (ha-735960-m03) Ensuring network mk-ha-735960 is active
	I0701 12:26:15.612457  653531 main.go:141] libmachine: (ha-735960-m03) Getting domain xml...
	I0701 12:26:15.613082  653531 main.go:141] libmachine: (ha-735960-m03) Creating domain...
	I0701 12:26:16.855928  653531 main.go:141] libmachine: (ha-735960-m03) Waiting to get IP...
	I0701 12:26:16.856767  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:16.857131  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:16.857182  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:16.857114  654164 retry.go:31] will retry after 232.687433ms: waiting for machine to come up
	I0701 12:26:17.091660  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:17.092187  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:17.092229  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:17.092112  654164 retry.go:31] will retry after 320.051772ms: waiting for machine to come up
	I0701 12:26:17.413613  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:17.414125  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:17.414157  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:17.414063  654164 retry.go:31] will retry after 415.446228ms: waiting for machine to come up
	I0701 12:26:17.830725  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:17.831413  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:17.831445  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:17.831349  654164 retry.go:31] will retry after 522.707968ms: waiting for machine to come up
	I0701 12:26:18.356092  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:18.356521  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:18.356543  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:18.356485  654164 retry.go:31] will retry after 572.783424ms: waiting for machine to come up
	I0701 12:26:18.931377  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:18.931831  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:18.931856  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:18.931778  654164 retry.go:31] will retry after 662.269299ms: waiting for machine to come up
	I0701 12:26:19.595406  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:19.595831  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:19.595862  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:19.595779  654164 retry.go:31] will retry after 965.977644ms: waiting for machine to come up
	I0701 12:26:20.562930  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:20.563372  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:20.563432  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:20.563328  654164 retry.go:31] will retry after 1.166893605s: waiting for machine to come up
	I0701 12:26:21.731632  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:21.732082  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:21.732114  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:21.732040  654164 retry.go:31] will retry after 1.800222328s: waiting for machine to come up
	I0701 12:26:23.534948  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:23.535342  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:23.535372  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:23.535277  654164 retry.go:31] will retry after 1.820829305s: waiting for machine to come up
	I0701 12:26:25.357271  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:25.357677  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:25.357701  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:25.357630  654164 retry.go:31] will retry after 1.816274117s: waiting for machine to come up
	I0701 12:26:27.176155  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:27.176621  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:27.176653  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:27.176598  654164 retry.go:31] will retry after 2.782602178s: waiting for machine to come up
	I0701 12:26:29.960991  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:29.961388  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:29.961421  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:29.961334  654164 retry.go:31] will retry after 3.816886888s: waiting for machine to come up
	I0701 12:26:33.779810  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.780404  653531 main.go:141] libmachine: (ha-735960-m03) Found IP for machine: 192.168.39.97
	I0701 12:26:33.780436  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has current primary IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.780448  653531 main.go:141] libmachine: (ha-735960-m03) Reserving static IP address...
	I0701 12:26:33.780953  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "ha-735960-m03", mac: "52:54:00:93:88:f2", ip: "192.168.39.97"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:33.780975  653531 main.go:141] libmachine: (ha-735960-m03) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960-m03", mac: "52:54:00:93:88:f2", ip: "192.168.39.97"}
	I0701 12:26:33.780986  653531 main.go:141] libmachine: (ha-735960-m03) Reserved static IP address: 192.168.39.97
	I0701 12:26:33.780995  653531 main.go:141] libmachine: (ha-735960-m03) Waiting for SSH to be available...
	I0701 12:26:33.781005  653531 main.go:141] libmachine: (ha-735960-m03) DBG | Getting to WaitForSSH function...
	I0701 12:26:33.783239  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.783609  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:33.783636  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.783742  653531 main.go:141] libmachine: (ha-735960-m03) DBG | Using SSH client type: external
	I0701 12:26:33.783770  653531 main.go:141] libmachine: (ha-735960-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa (-rw-------)
	I0701 12:26:33.783810  653531 main.go:141] libmachine: (ha-735960-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:26:33.783825  653531 main.go:141] libmachine: (ha-735960-m03) DBG | About to run SSH command:
	I0701 12:26:33.783839  653531 main.go:141] libmachine: (ha-735960-m03) DBG | exit 0
	I0701 12:26:33.906528  653531 main.go:141] libmachine: (ha-735960-m03) DBG | SSH cmd err, output: <nil>: 
	I0701 12:26:33.906854  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetConfigRaw
	I0701 12:26:33.907659  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetIP
	I0701 12:26:33.910504  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.910919  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:33.910952  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.911199  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:26:33.911468  653531 machine.go:94] provisionDockerMachine start ...
	I0701 12:26:33.911493  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:33.911726  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:33.913742  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.914049  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:33.914079  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.914213  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:33.914440  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:33.914614  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:33.914781  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:33.914952  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:33.915169  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:33.915186  653531 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:26:34.022720  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:26:34.022751  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetMachineName
	I0701 12:26:34.023048  653531 buildroot.go:166] provisioning hostname "ha-735960-m03"
	I0701 12:26:34.023086  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetMachineName
	I0701 12:26:34.023302  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.026253  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.026699  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.026731  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.026891  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.027100  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.027330  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.027468  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.027637  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:34.027853  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:34.027872  653531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960-m03 && echo "ha-735960-m03" | sudo tee /etc/hostname
	I0701 12:26:34.143884  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960-m03
	
	I0701 12:26:34.143919  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.146876  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.147233  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.147259  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.147410  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.147595  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.147764  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.147906  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.148107  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:34.148271  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:34.148287  653531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:26:34.259290  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 12:26:34.259326  653531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:26:34.259348  653531 buildroot.go:174] setting up certificates
	I0701 12:26:34.259361  653531 provision.go:84] configureAuth start
	I0701 12:26:34.259373  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetMachineName
	I0701 12:26:34.259700  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetIP
	I0701 12:26:34.262660  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.263056  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.263088  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.263229  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.265709  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.266104  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.266129  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.266291  653531 provision.go:143] copyHostCerts
	I0701 12:26:34.266320  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:26:34.266385  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:26:34.266399  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:26:34.266510  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:26:34.266616  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:26:34.266642  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:26:34.266651  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:26:34.266687  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:26:34.266758  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:26:34.266785  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:26:34.266794  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:26:34.266828  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:26:34.266895  653531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960-m03 san=[127.0.0.1 192.168.39.97 ha-735960-m03 localhost minikube]
	I0701 12:26:34.565581  653531 provision.go:177] copyRemoteCerts
	I0701 12:26:34.565649  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:26:34.565676  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.568539  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.568839  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.568870  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.569025  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.569261  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.569428  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.569588  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa Username:docker}
	I0701 12:26:34.652136  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:26:34.652230  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:26:34.676227  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:26:34.676305  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0701 12:26:34.699234  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:26:34.699313  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 12:26:34.721885  653531 provision.go:87] duration metric: took 462.509686ms to configureAuth
	I0701 12:26:34.721915  653531 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:26:34.722137  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:26:34.722181  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:34.722494  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.725227  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.725601  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.725629  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.725789  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.725994  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.726175  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.726384  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.726572  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:34.726794  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:34.726809  653531 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:26:34.831674  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:26:34.831699  653531 buildroot.go:70] root file system type: tmpfs
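The probe above is just `df --output=fstype /` run over SSH, keeping the last output line; seeing `tmpfs` tells the provisioner it is on a Buildroot live image. A minimal Go sketch of the same check (a plain exec of coreutils `df`, not minikube's actual helper):

```go
// Sketch of the root-filesystem probe logged above: run
// `df --output=fstype /` and keep the last line of output.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func rootFSType() (string, error) {
	out, err := exec.Command("df", "--output=fstype", "/").Output()
	if err != nil {
		return "", err
	}
	// line 0 is the "Type" header; the filesystem type is the last line,
	// mirroring the `| tail -n 1` in the SSH command above
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	return lines[len(lines)-1], nil
}

func main() {
	fs, err := rootFSType()
	if err != nil {
		panic(err)
	}
	fmt.Println("root fs:", fs) // "tmpfs" on the Buildroot guest in this log
}
```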
	I0701 12:26:34.831846  653531 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:26:34.831923  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.835107  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.835603  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.835626  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.835928  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.836184  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.836401  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.836577  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.836754  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:34.836963  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:34.837056  653531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.16"
	Environment="NO_PROXY=192.168.39.16,192.168.39.86"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:26:34.951789  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.16
	Environment=NO_PROXY=192.168.39.16,192.168.39.86
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:26:34.951830  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.954854  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.955349  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.955376  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.955552  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.955761  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.955952  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.956104  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.956269  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:34.956436  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:34.956451  653531 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:26:36.820196  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 12:26:36.820235  653531 machine.go:97] duration metric: took 2.908749821s to provisionDockerMachine
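The `diff ... || { mv ...; systemctl ...; }` command above is a write-diff-swap pattern: the new unit is uploaded as `docker.service.new`, and the move, daemon-reload, enable, and restart only run when the content differs, or, as here, when the live unit does not exist yet (hence the `diff: can't stat` message and the `Created symlink` line). A sketch of the same pattern in Go (not minikube's actual code; paths are taken from the log):

```go
// Write-diff-swap: only touch systemd when the unit content changed.
package main

import (
	"bytes"
	"os"
	"os/exec"
)

func installUnit(newPath, livePath string) error {
	want, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	have, err := os.ReadFile(livePath) // may not exist on first provision
	if err == nil && bytes.Equal(have, want) {
		return nil // unchanged: no reload or restart needed
	}
	if err := os.Rename(newPath, livePath); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", "docker"},
		{"restart", "docker"},
	} {
		if err := exec.Command("systemctl", args...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := installUnit(
		"/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service",
	); err != nil {
		panic(err)
	}
}
```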
	I0701 12:26:36.820254  653531 start.go:293] postStartSetup for "ha-735960-m03" (driver="kvm2")
	I0701 12:26:36.820269  653531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:26:36.820322  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:36.820717  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:26:36.820758  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:36.823679  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:36.824131  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:36.824158  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:36.824315  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:36.824557  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:36.824862  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:36.825025  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa Username:docker}
	I0701 12:26:36.909262  653531 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:26:36.913798  653531 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:26:36.913830  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:26:36.913904  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:26:36.913973  653531 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:26:36.913983  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:26:36.914063  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:26:36.924147  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:26:36.949103  653531 start.go:296] duration metric: took 128.830664ms for postStartSetup
	I0701 12:26:36.949169  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:36.949541  653531 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:26:36.949572  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:36.952321  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:36.952670  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:36.952703  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:36.952895  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:36.953116  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:36.953299  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:36.953494  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa Username:docker}
	I0701 12:26:37.037086  653531 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:26:37.037223  653531 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:26:37.097170  653531 fix.go:56] duration metric: took 21.511363009s for fixHost
	I0701 12:26:37.097229  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:37.100519  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.100936  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:37.100988  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.101235  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:37.101494  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:37.101681  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:37.101864  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:37.102058  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:37.102248  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:37.102261  653531 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0701 12:26:37.210872  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836797.190240924
	
	I0701 12:26:37.210897  653531 fix.go:216] guest clock: 1719836797.190240924
	I0701 12:26:37.210906  653531 fix.go:229] Guest: 2024-07-01 12:26:37.190240924 +0000 UTC Remote: 2024-07-01 12:26:37.09720405 +0000 UTC m=+154.567055715 (delta=93.036874ms)
	I0701 12:26:37.210928  653531 fix.go:200] guest clock delta is within tolerance: 93.036874ms
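The guest-clock check above runs `date +%s.%N` on the VM and compares the result against the host's clock. A sketch of the delta computation, using the exact values from this log (the "93.036874ms" matches the `delta=` field above):

```go
// Parse `date +%s.%N` output from the guest and compare it with the
// host clock; %N always prints exactly nine digits, so the fraction
// parses directly as nanoseconds.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// guest and host timestamps copied from the log lines above
	d, _ := guestClockDelta("1719836797.190240924",
		time.Unix(1719836797, 97204050))
	fmt.Println(d) // 93.036874ms, matching "delta=93.036874ms"
}
```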
	I0701 12:26:37.210935  653531 start.go:83] releasing machines lock for "ha-735960-m03", held for 21.625157566s
	I0701 12:26:37.210966  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:37.211304  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetIP
	I0701 12:26:37.213807  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.214222  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:37.214255  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.216716  653531 out.go:177] * Found network options:
	I0701 12:26:37.218305  653531 out.go:177]   - NO_PROXY=192.168.39.16,192.168.39.86
	W0701 12:26:37.219816  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:26:37.219845  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:26:37.219865  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:37.220522  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:37.220737  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:37.220844  653531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:26:37.220887  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	W0701 12:26:37.220953  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:26:37.220981  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:26:37.221057  653531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0701 12:26:37.221077  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:37.223616  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.223976  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:37.224003  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.224022  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.224163  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:37.224349  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:37.224476  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:37.224495  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.224522  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:37.224684  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:37.224708  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa Username:docker}
	I0701 12:26:37.224822  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:37.224957  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:37.225089  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa Username:docker}
	W0701 12:26:37.324512  653531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:26:37.324590  653531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:26:37.342354  653531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:26:37.342401  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:26:37.342553  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:26:37.361964  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:26:37.372356  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:26:37.382741  653531 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:26:37.382800  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:26:37.393672  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:26:37.404182  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:26:37.413967  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:26:37.425102  653531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:26:37.436486  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:26:37.448119  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:26:37.459499  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:26:37.470904  653531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:26:37.480202  653531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:26:37.489935  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:37.612275  653531 ssh_runner.go:195] Run: sudo systemctl restart containerd
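The `sed` series above pins containerd's CRI plugin to the `cgroupfs` driver by rewriting `SystemdCgroup` in /etc/containerd/config.toml before the restart. The same rewrite expressed as a Go regexp instead of sed (a hypothetical helper with the same effect):

```go
// Rewrite `SystemdCgroup = ...` to false in containerd's config,
// preserving the line's indentation like the sed above.
package main

import (
	"os"
	"regexp"
)

var systemdCgroupRe = regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)

func forceCgroupfs(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := systemdCgroupRe.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := forceCgroupfs("/etc/containerd/config.toml"); err != nil {
		panic(err)
	}
}
```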
	I0701 12:26:37.635575  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:26:37.635692  653531 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:26:37.653571  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:26:37.670438  653531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:26:37.688000  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:26:37.705115  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:26:37.718914  653531 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:26:37.744858  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:26:37.759980  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:26:37.779721  653531 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:26:37.783771  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:26:37.794141  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:26:37.811510  653531 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:26:37.931976  653531 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:26:38.066164  653531 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:26:38.066230  653531 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 12:26:38.083572  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:38.206358  653531 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:26:40.648995  653531 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.442581628s)
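docker.go:574 above reports configuring docker to use "cgroupfs" via a 130-byte /etc/docker/daemon.json, but the log never prints the file itself. A plausible reconstruction is sketched below; this is an assumption consistent with the "cgroupfs" message, not the verbatim file:

```go
// Plausible daemon.json matching the "cgroupfs" message above — an
// assumption, NOT the verbatim 130-byte file from the log.
package main

import "os"

func main() {
	daemonJSON := `{"exec-opts":["native.cgroupdriver=cgroupfs"],"log-driver":"json-file","storage-driver":"overlay2"}
`
	if err := os.WriteFile("/etc/docker/daemon.json", []byte(daemonJSON), 0o644); err != nil {
		panic(err)
	}
}
```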
	I0701 12:26:40.649094  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 12:26:40.663523  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:26:40.678231  653531 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:26:40.794839  653531 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:26:40.936707  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:41.068605  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:26:41.086480  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:26:41.102238  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:41.225877  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0701 12:26:41.309074  653531 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:26:41.309144  653531 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0701 12:26:41.314764  653531 start.go:562] Will wait 60s for crictl version
	I0701 12:26:41.314839  653531 ssh_runner.go:195] Run: which crictl
	I0701 12:26:41.318792  653531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:26:41.356836  653531 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.1
	RuntimeApiVersion:  v1
	I0701 12:26:41.356927  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:26:41.383790  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:26:41.409143  653531 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
	I0701 12:26:41.410603  653531 out.go:177]   - env NO_PROXY=192.168.39.16
	I0701 12:26:41.412215  653531 out.go:177]   - env NO_PROXY=192.168.39.16,192.168.39.86
	I0701 12:26:41.413404  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetIP
	I0701 12:26:41.416274  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:41.416763  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:41.416796  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:41.417070  653531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0701 12:26:41.421392  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
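The one-liner above rewrites /etc/hosts idempotently: strip any existing `host.minikube.internal` line, then append a fresh `IP<tab>name` mapping. A Go rendering of the same rewrite (a sketch; the test does it with grep/echo over SSH):

```go
// Drop any stale line ending in "\thost.minikube.internal", then
// append the fresh mapping, exactly like the shell pipeline above.
package main

import (
	"os"
	"strings"
)

func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
```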
	I0701 12:26:41.434549  653531 mustload.go:65] Loading cluster: ha-735960
	I0701 12:26:41.434797  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:26:41.435079  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:26:41.435129  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:26:41.451156  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45677
	I0701 12:26:41.451676  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:26:41.452212  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:26:41.452237  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:26:41.452614  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:26:41.452827  653531 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:26:41.454575  653531 host.go:66] Checking if "ha-735960" exists ...
	I0701 12:26:41.454891  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:26:41.454938  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:26:41.471129  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33243
	I0701 12:26:41.471681  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:26:41.472198  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:26:41.472222  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:26:41.472612  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:26:41.472844  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:26:41.473032  653531 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960 for IP: 192.168.39.97
	I0701 12:26:41.473049  653531 certs.go:194] generating shared ca certs ...
	I0701 12:26:41.473074  653531 certs.go:226] acquiring lock for ca certs: {Name:mk34e166bfd069e523b2325e14d1812c523bff53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:26:41.473230  653531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key
	I0701 12:26:41.473268  653531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key
	I0701 12:26:41.473278  653531 certs.go:256] generating profile certs ...
	I0701 12:26:41.473349  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key
	I0701 12:26:41.473405  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.f1482ab5
	I0701 12:26:41.473453  653531 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key
	I0701 12:26:41.473465  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 12:26:41.473478  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 12:26:41.473490  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 12:26:41.473503  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 12:26:41.473514  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0701 12:26:41.473528  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0701 12:26:41.473537  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0701 12:26:41.473548  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0701 12:26:41.473603  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem (1338 bytes)
	W0701 12:26:41.473630  653531 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854_empty.pem, impossibly tiny 0 bytes
	I0701 12:26:41.473639  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 12:26:41.473659  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:26:41.473680  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:26:41.473702  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem (1675 bytes)
	I0701 12:26:41.473736  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:26:41.473759  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem -> /usr/share/ca-certificates/637854.pem
	I0701 12:26:41.473772  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /usr/share/ca-certificates/6378542.pem
	I0701 12:26:41.473784  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:26:41.494518  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:26:41.498371  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:26:41.498974  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:26:41.499011  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:26:41.499158  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:26:41.499416  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:26:41.499610  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:26:41.499835  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:26:41.570757  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0701 12:26:41.575932  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0701 12:26:41.587511  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0701 12:26:41.591633  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0701 12:26:41.604961  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0701 12:26:41.609152  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0701 12:26:41.619653  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0701 12:26:41.623572  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0701 12:26:41.634171  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0701 12:26:41.638176  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0701 12:26:41.654120  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0701 12:26:41.659095  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0701 12:26:41.671865  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:26:41.701740  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:26:41.726445  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:26:41.751925  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:26:41.776782  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0701 12:26:41.801611  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 12:26:41.825786  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 12:26:41.849992  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 12:26:41.873760  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem --> /usr/share/ca-certificates/637854.pem (1338 bytes)
	I0701 12:26:41.898685  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /usr/share/ca-certificates/6378542.pem (1708 bytes)
	I0701 12:26:41.923397  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:26:41.948251  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0701 12:26:41.965919  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0701 12:26:41.982966  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0701 12:26:42.001626  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0701 12:26:42.019386  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0701 12:26:42.036382  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0701 12:26:42.053238  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0701 12:26:42.070881  653531 ssh_runner.go:195] Run: openssl version
	I0701 12:26:42.076651  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/637854.pem && ln -fs /usr/share/ca-certificates/637854.pem /etc/ssl/certs/637854.pem"
	I0701 12:26:42.087389  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/637854.pem
	I0701 12:26:42.093055  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 12:11 /usr/share/ca-certificates/637854.pem
	I0701 12:26:42.093154  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/637854.pem
	I0701 12:26:42.099823  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/637854.pem /etc/ssl/certs/51391683.0"
	I0701 12:26:42.111701  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6378542.pem && ln -fs /usr/share/ca-certificates/6378542.pem /etc/ssl/certs/6378542.pem"
	I0701 12:26:42.125593  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6378542.pem
	I0701 12:26:42.130163  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 12:11 /usr/share/ca-certificates/6378542.pem
	I0701 12:26:42.130246  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem
	I0701 12:26:42.136102  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6378542.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 12:26:42.147064  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:26:42.159086  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:26:42.163767  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:26:42.163864  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:26:42.170462  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
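Each `openssl x509 -hash -noout` call above prints the cert's subject hash (e.g. `b5213941` for minikubeCA), and the cert is then linked as `/etc/ssl/certs/<hash>.0` so OpenSSL's directory lookup can find it. A sketch of that step in Go, shelling out to openssl as the log does:

```go
// Compute the subject hash with openssl and create the <hash>.0
// symlink in the system cert directory.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in this log
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link (ln -fs semantics)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/etc/ssl/certs/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```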
	I0701 12:26:42.181119  653531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 12:26:42.185711  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 12:26:42.191736  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 12:26:42.198232  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 12:26:42.204698  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 12:26:42.210909  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 12:26:42.216837  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
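The run of `openssl x509 -checkend 86400` commands above verifies that each control-plane certificate will still be valid 24 hours from now, so a cert about to expire is regenerated rather than reused. The equivalent check in pure Go with crypto/x509 (a sketch, not minikube's code):

```go
// Fail if the certificate's NotAfter falls within the next `within`
// window — the same test as `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func checkEnd(path string, within time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(within).After(cert.NotAfter) {
		return fmt.Errorf("%s expires at %s", path, cert.NotAfter)
	}
	return nil
}

func main() {
	err := checkEnd("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	fmt.Println(err) // <nil> when the cert outlives the 24h window
}
```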
	I0701 12:26:42.222755  653531 kubeadm.go:928] updating node {m03 192.168.39.97 8443 v1.30.2 docker true true} ...
	I0701 12:26:42.222878  653531 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0701 12:26:42.222906  653531 kube-vip.go:115] generating kube-vip config ...
	I0701 12:26:42.222955  653531 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0701 12:26:42.237298  653531 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0701 12:26:42.237376  653531 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
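The generated manifest above runs kube-vip as a static pod that holds the control-plane VIP 192.168.39.254 on port 8443 with lease-based leader election (`vip_leaderelection`/`plndr-cp-lock`) and load-balancing enabled (`lb_enable`). A quick reachability probe for that VIP (a sketch for illustration, not part of the test):

```go
// Dial the kube-vip address from the log; a successful TCP connect
// shows the current leader is answering on the VIP.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 2*time.Second)
	if err != nil {
		fmt.Println("VIP not answering:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP reachable")
}
```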
	I0701 12:26:42.237455  653531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 12:26:42.247439  653531 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:26:42.247515  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0701 12:26:42.257290  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0701 12:26:42.274152  653531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:26:42.290241  653531 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0701 12:26:42.308095  653531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0701 12:26:42.312034  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:26:42.325214  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:42.447612  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:26:42.465983  653531 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:26:42.466298  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:26:42.468248  653531 out.go:177] * Verifying Kubernetes components...
	I0701 12:26:42.469706  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:42.625060  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:26:42.647149  653531 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:26:42.647532  653531 kapi.go:59] client config for ha-735960: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key", CAFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfbb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0701 12:26:42.647632  653531 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.16:8443
	I0701 12:26:42.647948  653531 node_ready.go:35] waiting up to 6m0s for node "ha-735960-m03" to be "Ready" ...
	I0701 12:26:42.648043  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:42.648055  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:42.648066  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:42.648079  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:42.652553  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:26:43.148887  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:43.148913  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.148924  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.148931  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.152504  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:43.153020  653531 node_ready.go:49] node "ha-735960-m03" has status "Ready":"True"
	I0701 12:26:43.153041  653531 node_ready.go:38] duration metric: took 505.070913ms for node "ha-735960-m03" to be "Ready" ...
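After the stale VIP host is overridden (kubeadm.go:477 above), the readiness wait polls `GET /api/v1/nodes/ha-735960-m03` against the primary endpoint until the node reports `Ready: True`. minikube issues these GETs through its own logged round-tripper; a client-go equivalent of the wait might look like this (a sketch, with the kubeconfig path and node name taken from the log):

```go
// Poll the node's Ready condition until true or timeout.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log polls on a similar cadence
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/19166-630650/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "ha-735960-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("ha-735960-m03 is Ready")
}
```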
	I0701 12:26:43.153051  653531 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 12:26:43.153132  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:26:43.153144  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.153154  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.153161  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.159789  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:26:43.167076  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.167158  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:26:43.167167  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.167175  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.167179  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.169757  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.170310  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:43.170347  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.170357  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.170362  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.173097  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.173879  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:43.173897  653531 pod_ready.go:81] duration metric: took 6.79477ms for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.173905  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.173970  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p4rtz
	I0701 12:26:43.173977  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.173984  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.173987  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.176719  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.177389  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:43.177403  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.177410  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.177415  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.180272  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.180876  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:43.180892  653531 pod_ready.go:81] duration metric: took 6.981686ms for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.180901  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.180946  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960
	I0701 12:26:43.180953  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.180959  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.180963  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.183979  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:43.184715  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:43.184733  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.184744  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.184750  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.187303  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.187727  653531 pod_ready.go:92] pod "etcd-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:43.187743  653531 pod_ready.go:81] duration metric: took 6.837753ms for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.187751  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.187803  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m02
	I0701 12:26:43.187810  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.187816  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.187820  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.190206  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.190728  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:43.190744  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.190753  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.190761  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.193433  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.194190  653531 pod_ready.go:92] pod "etcd-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:43.194207  653531 pod_ready.go:81] duration metric: took 6.448739ms for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.194216  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.349638  653531 request.go:629] Waited for 155.349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:43.349754  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:43.349767  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.349778  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.349790  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.354862  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:26:43.548911  653531 request.go:629] Waited for 193.270032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:43.548983  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:43.549014  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.549029  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.549034  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.554047  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
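
The "Waited for ... due to client-side throttling, not priority and fairness" lines above are emitted by client-go's client-side rate limiter (rest/request.go), which kicks in because the readiness check issues many GETs in a tight loop. A minimal, hypothetical sketch of how that limiter is configured on a rest.Config; the QPS/Burst values and the kubeconfig path are illustrative assumptions, not values taken from minikube:

	// Hedged sketch: configuring client-go's client-side rate limiter.
	// QPS/Burst and the kubeconfig path are illustrative assumptions.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			panic(err)
		}
		// Requests beyond Burst are queued so the sustained rate stays near QPS;
		// when a queued request waits long enough, client-go logs the
		// "Waited for ... due to client-side throttling" message seen above.
		cfg.QPS = 5
		cfg.Burst = 10

		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("kube-system pods:", len(pods.Items))
	}
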
	I0701 12:26:43.749322  653531 request.go:629] Waited for 54.224497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:43.749397  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:43.749405  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.749423  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.749433  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.753610  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:26:43.949318  653531 request.go:629] Waited for 194.40537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:43.949442  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:43.949455  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.949466  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.949475  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.953476  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:44.195013  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:44.195041  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:44.195053  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:44.195058  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:44.198623  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:44.349775  653531 request.go:629] Waited for 150.337133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:44.349881  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:44.349890  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:44.349901  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:44.349909  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:44.354832  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:26:44.694539  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:44.694560  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:44.694569  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:44.694573  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:44.698072  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:44.749262  653531 request.go:629] Waited for 50.212385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:44.749342  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:44.749357  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:44.749376  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:44.749400  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:44.759594  653531 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0701 12:26:45.194608  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:45.194639  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:45.194651  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:45.194656  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:45.198135  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:45.199157  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:45.199178  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:45.199187  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:45.199193  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:45.201747  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:45.202475  653531 pod_ready.go:102] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:45.695358  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:45.695387  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:45.695398  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:45.695405  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:45.698583  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:45.699570  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:45.699591  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:45.699603  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:45.699611  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:45.702299  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:46.195334  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:46.195357  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:46.195366  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:46.195369  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:46.199158  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:46.200116  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:46.200134  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:46.200146  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:46.200153  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:46.203740  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:46.695210  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:46.695238  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:46.695250  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:46.695257  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:46.698972  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:46.699688  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:46.699709  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:46.699722  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:46.699728  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:46.703576  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:47.194463  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:47.194494  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:47.194504  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:47.194512  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:47.197423  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:47.198125  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:47.198144  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:47.198156  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:47.198166  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:47.201172  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:47.695417  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:47.695446  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:47.695457  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:47.695463  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:47.698528  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:47.699400  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:47.699424  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:47.699435  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:47.699440  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:47.702619  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:47.703202  653531 pod_ready.go:102] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:48.194609  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:48.194632  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:48.194640  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:48.194656  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:48.197877  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:48.198784  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:48.198804  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:48.198815  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:48.198819  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:48.201611  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:48.694433  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:48.694459  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:48.694471  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:48.694478  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:48.697539  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:48.698170  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:48.698185  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:48.698193  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:48.698196  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:48.700886  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:49.194905  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:49.194931  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:49.194942  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:49.194954  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:49.199572  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:26:49.200541  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:49.200560  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:49.200570  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:49.200575  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:49.204090  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:49.694531  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:49.694551  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:49.694559  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:49.694563  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:49.698105  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:49.699044  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:49.699062  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:49.699073  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:49.699078  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:49.701617  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:50.195294  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:50.195322  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:50.195333  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:50.195338  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:50.198820  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:50.199561  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:50.199579  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:50.199588  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:50.199594  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:50.202455  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:50.203029  653531 pod_ready.go:102] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:50.694678  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:50.694700  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:50.694708  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:50.694712  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:50.697694  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:50.698383  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:50.698401  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:50.698409  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:50.698413  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:50.701398  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:51.195484  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:51.195522  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:51.195535  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:51.195539  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:51.199113  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:51.199788  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:51.199804  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:51.199811  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:51.199815  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:51.202679  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:51.695276  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:51.695304  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:51.695318  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:51.695325  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:51.698725  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:51.699425  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:51.699444  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:51.699454  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:51.699461  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:51.702960  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:52.195136  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:52.195168  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:52.195178  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:52.195182  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:52.198421  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:52.199068  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:52.199081  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:52.199089  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:52.199133  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:52.201737  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:52.695128  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:52.695153  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:52.695161  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:52.695165  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:52.698791  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:52.699625  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:52.699640  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:52.699647  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:52.699666  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:52.702284  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:52.702827  653531 pod_ready.go:102] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:53.194518  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:53.194542  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:53.194550  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:53.194555  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:53.197969  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:53.198583  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:53.198602  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:53.198610  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:53.198615  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:53.201376  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:53.695296  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:53.695318  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:53.695326  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:53.695331  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:53.699078  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:53.699884  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:53.699910  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:53.699922  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:53.699929  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:53.703186  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:54.195014  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:54.195043  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:54.195054  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:54.195058  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:54.199057  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:54.199733  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:54.199750  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:54.199758  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:54.199763  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:54.202961  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:54.695177  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:54.695212  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:54.695225  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:54.695233  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:54.698371  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:54.699201  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:54.699216  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:54.699224  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:54.699227  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:54.702002  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:55.194543  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:55.194566  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:55.194574  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:55.194579  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:55.198201  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:55.198814  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:55.198832  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:55.198839  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:55.198843  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:55.201469  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:55.201993  653531 pod_ready.go:102] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:55.694950  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:55.694972  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:55.694983  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:55.694990  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:55.698498  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:55.699087  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:55.699101  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:55.699108  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:55.699112  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:55.701817  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.194521  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:56.194544  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.194552  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.194557  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.197837  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:56.198482  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:56.198499  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.198505  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.198509  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.201147  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.201653  653531 pod_ready.go:92] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:56.201674  653531 pod_ready.go:81] duration metric: took 13.007452083s for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
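
The 13s span above is the poll loop at work: GET the pod, GET its node, re-check until the pod's Ready condition flips from "False" to "True". A minimal sketch of that loop shape using client-go and wait.PollImmediate; this is an assumption-laden reconstruction of the pattern the log shows, not minikube's actual pod_ready.go (the interval and helper name are made up):

	// Hedged sketch of the wait-loop shape in the log: poll a pod until its
	// PodReady condition reports "True", within the 6m0s budget shown above.
	package podwait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitPodReady(client kubernetes.Interface, ns, name string) error {
		return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, err // stop on hard errors; a real loop might retry transient ones
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					// "Ready":"True" in the log corresponds to ConditionTrue here.
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // condition not reported yet; keep polling
		})
	}
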
	I0701 12:26:56.201692  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:56.201750  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960
	I0701 12:26:56.201757  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.201764  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.201770  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.204418  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.205132  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:56.205148  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.205154  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.205158  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.207485  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.207887  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:56.207907  653531 pod_ready.go:81] duration metric: took 6.206212ms for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:56.207916  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:56.207971  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m02
	I0701 12:26:56.207981  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.207988  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.207992  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.210274  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.210769  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:56.210784  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.210791  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.210795  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.213307  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.213730  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:56.213745  653531 pod_ready.go:81] duration metric: took 5.823695ms for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:56.213752  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:56.213799  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:56.213806  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.213813  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.213817  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.221893  653531 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0701 12:26:56.222630  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:56.222650  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.222661  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.222665  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.225298  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.714434  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:56.714457  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.714466  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.714473  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.717715  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:56.718387  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:56.718404  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.718414  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.718420  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.721172  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:57.213955  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:57.213979  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:57.213987  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:57.213992  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:57.217394  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:57.218050  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:57.218071  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:57.218082  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:57.218088  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:57.221478  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:57.714757  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:57.714779  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:57.714787  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:57.714792  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:57.717911  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:57.718695  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:57.718720  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:57.718734  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:57.718740  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:57.721551  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:58.214582  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:58.214605  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:58.214613  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:58.214616  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:58.218396  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:58.219147  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:58.219167  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:58.219174  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:58.219178  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:58.221830  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:58.222386  653531 pod_ready.go:102] pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:58.714864  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:58.714890  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:58.714901  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:58.714906  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:58.718181  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:58.718855  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:58.718874  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:58.718881  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:58.718885  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:58.722484  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:59.214439  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:59.214472  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:59.214484  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:59.214491  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:59.217758  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:59.218712  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:59.218732  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:59.218738  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:59.218742  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:59.221527  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:59.713995  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:59.714020  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:59.714028  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:59.714033  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:59.717121  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:59.717838  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:59.717855  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:59.717862  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:59.717866  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:59.720568  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:00.214542  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:00.214568  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:00.214578  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:00.214583  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:00.218220  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:00.218919  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:00.218938  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:00.218947  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:00.218954  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:00.222119  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:00.223039  653531 pod_ready.go:102] pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:27:00.714993  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:00.715015  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:00.715023  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:00.715027  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:00.718022  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:00.718871  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:00.718894  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:00.718905  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:00.718910  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:00.721660  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:01.214293  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:01.214320  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:01.214345  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:01.214354  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:01.217660  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:01.218619  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:01.218636  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:01.218645  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:01.218649  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:01.221248  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:01.714569  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:01.714593  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:01.714602  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:01.714607  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:01.717986  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:01.718877  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:01.718900  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:01.718912  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:01.718917  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:01.722103  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.213928  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:02.213953  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.213961  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.213965  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.217318  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.218078  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:02.218093  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.218099  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.218102  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.221493  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.714825  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:02.714849  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.714857  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.714862  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.718359  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.719162  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:02.719180  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.719188  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.719193  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.722363  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.723005  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:02.723029  653531 pod_ready.go:81] duration metric: took 6.509269845s for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.723044  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.723152  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960
	I0701 12:27:02.723163  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.723174  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.723186  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.726502  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.727250  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:02.727266  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.727277  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.727280  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.730522  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.731090  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:02.731116  653531 pod_ready.go:81] duration metric: took 8.062099ms for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.731129  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.731206  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m02
	I0701 12:27:02.731216  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.731226  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.731232  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.734354  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.735350  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:02.735370  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.735378  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.735381  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.738250  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:02.739014  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:02.739035  653531 pod_ready.go:81] duration metric: took 7.898052ms for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.739045  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.739108  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m03
	I0701 12:27:02.739116  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.739125  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.739134  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.742376  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.743084  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:02.743106  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.743117  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.743121  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.746455  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.747046  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:02.747075  653531 pod_ready.go:81] duration metric: took 8.017741ms for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.747091  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.747213  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25ssf
	I0701 12:27:02.747226  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.747237  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.747242  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.750009  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:02.750887  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:27:02.750910  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.750941  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.750947  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.753841  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:02.754410  653531 pod_ready.go:97] node "ha-735960-m04" hosting pod "kube-proxy-25ssf" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m04" has status "Ready":"Unknown"
	I0701 12:27:02.754439  653531 pod_ready.go:81] duration metric: took 7.336267ms for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	E0701 12:27:02.754453  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m04" hosting pod "kube-proxy-25ssf" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m04" has status "Ready":"Unknown"
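
Here the wait short-circuits instead of burning the 6m0s budget: the pod's own readiness is not meaningful while its hosting node reports Ready "Unknown", so the condition is recorded as an error and the check moves on to the next pod. A hedged sketch of such a node gate; the package and function names are illustrative assumptions, not minikube's helpers:

	// Hedged sketch: check whether the node hosting a pod is Ready before
	// trusting the pod's own Ready condition.
	package nodegate

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func nodeIsReady(client kubernetes.Interface, name string) (bool, error) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// "Ready":"Unknown" in the log maps to ConditionUnknown here,
				// which this sketch treats the same as not ready.
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
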
	I0701 12:27:02.754464  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.915931  653531 request.go:629] Waited for 161.334912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:27:02.916009  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:27:02.916016  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.916026  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.916032  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.922578  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:27:03.115563  653531 request.go:629] Waited for 192.243271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:03.115665  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:03.115679  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:03.115693  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:03.115702  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:03.119673  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:03.120379  653531 pod_ready.go:92] pod "kube-proxy-776rt" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:03.120399  653531 pod_ready.go:81] duration metric: took 365.926734ms for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:03.120409  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:03.315515  653531 request.go:629] Waited for 195.003147ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:27:03.315575  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:27:03.315580  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:03.315588  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:03.315593  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:03.319367  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:03.515329  653531 request.go:629] Waited for 195.408895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:03.515421  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:03.515429  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:03.515440  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:03.515452  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:03.518825  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:03.519611  653531 pod_ready.go:92] pod "kube-proxy-b6knb" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:03.519633  653531 pod_ready.go:81] duration metric: took 399.213433ms for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:03.519642  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:03.715721  653531 request.go:629] Waited for 195.977677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:27:03.715811  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:27:03.715820  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:03.715828  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:03.715833  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:03.720058  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:03.915338  653531 request.go:629] Waited for 194.486914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:03.915438  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:03.915447  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:03.915455  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:03.915462  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:03.919143  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:03.919765  653531 pod_ready.go:92] pod "kube-proxy-lphzn" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:03.919789  653531 pod_ready.go:81] duration metric: took 400.14123ms for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:03.919800  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:04.114907  653531 request.go:629] Waited for 195.032639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:27:04.114983  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:27:04.115004  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:04.115019  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:04.115027  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:04.119283  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:04.315128  653531 request.go:629] Waited for 195.065236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:04.315231  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:04.315243  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:04.315255  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:04.315264  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:04.319107  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:04.319792  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:04.319821  653531 pod_ready.go:81] duration metric: took 400.011957ms for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:04.319838  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:04.515786  653531 request.go:629] Waited for 195.848501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:27:04.515865  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:27:04.515872  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:04.515885  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:04.515894  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:04.519607  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:04.715555  653531 request.go:629] Waited for 195.254305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:04.715662  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:04.715673  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:04.715686  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:04.715696  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:04.718989  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:04.719533  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:04.719555  653531 pod_ready.go:81] duration metric: took 399.709368ms for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:04.719565  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:04.915742  653531 request.go:629] Waited for 196.076319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:27:04.915873  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:27:04.915884  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:04.915892  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:04.915896  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:04.919910  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:05.114903  653531 request.go:629] Waited for 194.321141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:05.114998  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:05.115010  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.115020  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.115029  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.118835  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:05.119325  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:05.119348  653531 pod_ready.go:81] duration metric: took 399.776156ms for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:05.119360  653531 pod_ready.go:38] duration metric: took 21.966297492s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
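The readiness loop above polls each system pod and cross-checks the pod's host node: a pod scheduled on a node whose Ready condition is Unknown (ha-735960-m04 here) is logged and skipped rather than failing the whole wait. A minimal client-go sketch of that gate, assuming a standard kubeconfig and reusing the pod name from the log; this is an illustration, not minikube's actual code:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-25ssf", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), pod.Spec.NodeName, metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if !nodeReady(node) {
            // Mirrors the pod_ready.go:97 message above: skip instead of fail.
            fmt.Printf("node %q hosting pod %q is not Ready; skipping\n", node.Name, pod.Name)
        }
    }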
	I0701 12:27:05.119380  653531 api_server.go:52] waiting for apiserver process to appear ...
	I0701 12:27:05.119446  653531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 12:27:05.134970  653531 api_server.go:72] duration metric: took 22.668924734s to wait for apiserver process to appear ...
	I0701 12:27:05.135005  653531 api_server.go:88] waiting for apiserver healthz status ...
	I0701 12:27:05.135037  653531 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0701 12:27:05.139924  653531 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0701 12:27:05.140029  653531 round_trippers.go:463] GET https://192.168.39.16:8443/version
	I0701 12:27:05.140040  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.140052  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.140060  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.141045  653531 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0701 12:27:05.141124  653531 api_server.go:141] control plane version: v1.30.2
	I0701 12:27:05.141142  653531 api_server.go:131] duration metric: took 6.129152ms to wait for apiserver health ...
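The health gate above is simply an HTTPS GET against /healthz (expecting the body "ok") followed by a version read. A standalone probe under the same endpoint, shown as a sketch only: TLS verification is skipped here for brevity, whereas minikube authenticates with the cluster's CA and client certificates:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Assumption: InsecureSkipVerify to keep the sketch short; do not do
        // this in real tooling, use the cluster CA instead.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.39.16:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // The log's api_server.go:279 line corresponds to this readout.
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }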
	I0701 12:27:05.141156  653531 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 12:27:05.315496  653531 request.go:629] Waited for 174.257848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:27:05.315603  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:27:05.315615  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.315627  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.315640  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.331176  653531 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0701 12:27:05.341126  653531 system_pods.go:59] 26 kube-system pods found
	I0701 12:27:05.341168  653531 system_pods.go:61] "coredns-7db6d8ff4d-nk4lf" [c03dd635-a82d-4f18-bd72-ec575f91867e] Running
	I0701 12:27:05.341173  653531 system_pods.go:61] "coredns-7db6d8ff4d-p4rtz" [267efba7-bf34-48d5-ab15-5bda45ff2f4f] Running
	I0701 12:27:05.341177  653531 system_pods.go:61] "etcd-ha-735960" [4b98745c-292f-42b5-977c-69c50fd241f1] Running
	I0701 12:27:05.341181  653531 system_pods.go:61] "etcd-ha-735960-m02" [fed8cdfa-8428-47e0-84ef-05297ad232f8] Running
	I0701 12:27:05.341184  653531 system_pods.go:61] "etcd-ha-735960-m03" [50b07bc3-ff6b-487d-8654-901d96892868] Running
	I0701 12:27:05.341187  653531 system_pods.go:61] "kindnet-2424m" [aa18d5dd-f6eb-4f04-a61e-b0b257e214af] Running
	I0701 12:27:05.341190  653531 system_pods.go:61] "kindnet-6gx8s" [7f46a773-a075-476c-9e54-89f125b4b57a] Running
	I0701 12:27:05.341195  653531 system_pods.go:61] "kindnet-7f6hm" [a8c302b4-1163-4d4f-bfe3-4fd3b5d23cf0] Running
	I0701 12:27:05.341199  653531 system_pods.go:61] "kindnet-bztzv" [7afa0e45-3d10-40bc-b422-7005a3ca9d3a] Running
	I0701 12:27:05.341203  653531 system_pods.go:61] "kube-apiserver-ha-735960" [ad041aaa-465a-4d8a-a8dc-b7665e1d587d] Running
	I0701 12:27:05.341208  653531 system_pods.go:61] "kube-apiserver-ha-735960-m02" [ba28f48e-1c18-47e3-ab11-a9b5588c5c32] Running
	I0701 12:27:05.341213  653531 system_pods.go:61] "kube-apiserver-ha-735960-m03" [baafa3bf-78ee-4269-9591-b0440927e055] Running
	I0701 12:27:05.341218  653531 system_pods.go:61] "kube-controller-manager-ha-735960" [3f0f0cf5-329d-47bc-b922-7583902e2607] Running
	I0701 12:27:05.341222  653531 system_pods.go:61] "kube-controller-manager-ha-735960-m02" [258fde18-ac5c-4446-842b-9465529b154c] Running
	I0701 12:27:05.341231  653531 system_pods.go:61] "kube-controller-manager-ha-735960-m03" [79acc56b-a9e0-4d4b-bc64-1a3a36ddf051] Running
	I0701 12:27:05.341235  653531 system_pods.go:61] "kube-proxy-25ssf" [11f0dc23-ab9d-4d39-988d-4c44dfde86cd] Running
	I0701 12:27:05.341244  653531 system_pods.go:61] "kube-proxy-776rt" [5666dac7-924e-4429-bd1d-a1a5647cc611] Running
	I0701 12:27:05.341248  653531 system_pods.go:61] "kube-proxy-b6knb" [eb36e930-5799-4ff7-821a-ccb22303cd1b] Running
	I0701 12:27:05.341253  653531 system_pods.go:61] "kube-proxy-lphzn" [0761a7a6-740e-4cde-9ab5-e02e8d417907] Running
	I0701 12:27:05.341258  653531 system_pods.go:61] "kube-scheduler-ha-735960" [c624cf42-a7d6-4aaf-859d-1aeaf29f9acb] Running
	I0701 12:27:05.341266  653531 system_pods.go:61] "kube-scheduler-ha-735960-m02" [7de78af7-2d79-46dc-bd34-f221d79fde06] Running
	I0701 12:27:05.341276  653531 system_pods.go:61] "kube-scheduler-ha-735960-m03" [9f9a2030-9332-44af-b8dc-3b4609e53f91] Running
	I0701 12:27:05.341284  653531 system_pods.go:61] "kube-vip-ha-735960" [4299679a-c145-4f4f-8ec6-3cd468b98ef1] Running
	I0701 12:27:05.341289  653531 system_pods.go:61] "kube-vip-ha-735960-m02" [1c9b13e1-515c-43c0-8d99-5ad1c1807727] Running
	I0701 12:27:05.341296  653531 system_pods.go:61] "kube-vip-ha-735960-m03" [7069ea7c-5461-4fe6-a969-97fe33396ebb] Running
	I0701 12:27:05.341300  653531 system_pods.go:61] "storage-provisioner" [f5c4f7f9-d648-4019-a5ea-6ce59f6c5663] Running
	I0701 12:27:05.341308  653531 system_pods.go:74] duration metric: took 200.142768ms to wait for pod list to return data ...
	I0701 12:27:05.341319  653531 default_sa.go:34] waiting for default service account to be created ...
	I0701 12:27:05.515805  653531 request.go:629] Waited for 174.38988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/default/serviceaccounts
	I0701 12:27:05.515869  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/default/serviceaccounts
	I0701 12:27:05.515874  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.515882  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.515886  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.519545  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:05.519680  653531 default_sa.go:45] found service account: "default"
	I0701 12:27:05.519701  653531 default_sa.go:55] duration metric: took 178.373792ms for default service account to be created ...
	I0701 12:27:05.519712  653531 system_pods.go:116] waiting for k8s-apps to be running ...
	I0701 12:27:05.715337  653531 request.go:629] Waited for 195.548539ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:27:05.715405  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:27:05.715411  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.715423  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.715431  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.722571  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:27:05.729587  653531 system_pods.go:86] 26 kube-system pods found
	I0701 12:27:05.729628  653531 system_pods.go:89] "coredns-7db6d8ff4d-nk4lf" [c03dd635-a82d-4f18-bd72-ec575f91867e] Running
	I0701 12:27:05.729636  653531 system_pods.go:89] "coredns-7db6d8ff4d-p4rtz" [267efba7-bf34-48d5-ab15-5bda45ff2f4f] Running
	I0701 12:27:05.729642  653531 system_pods.go:89] "etcd-ha-735960" [4b98745c-292f-42b5-977c-69c50fd241f1] Running
	I0701 12:27:05.729649  653531 system_pods.go:89] "etcd-ha-735960-m02" [fed8cdfa-8428-47e0-84ef-05297ad232f8] Running
	I0701 12:27:05.729655  653531 system_pods.go:89] "etcd-ha-735960-m03" [50b07bc3-ff6b-487d-8654-901d96892868] Running
	I0701 12:27:05.729661  653531 system_pods.go:89] "kindnet-2424m" [aa18d5dd-f6eb-4f04-a61e-b0b257e214af] Running
	I0701 12:27:05.729666  653531 system_pods.go:89] "kindnet-6gx8s" [7f46a773-a075-476c-9e54-89f125b4b57a] Running
	I0701 12:27:05.729671  653531 system_pods.go:89] "kindnet-7f6hm" [a8c302b4-1163-4d4f-bfe3-4fd3b5d23cf0] Running
	I0701 12:27:05.729677  653531 system_pods.go:89] "kindnet-bztzv" [7afa0e45-3d10-40bc-b422-7005a3ca9d3a] Running
	I0701 12:27:05.729684  653531 system_pods.go:89] "kube-apiserver-ha-735960" [ad041aaa-465a-4d8a-a8dc-b7665e1d587d] Running
	I0701 12:27:05.729689  653531 system_pods.go:89] "kube-apiserver-ha-735960-m02" [ba28f48e-1c18-47e3-ab11-a9b5588c5c32] Running
	I0701 12:27:05.729695  653531 system_pods.go:89] "kube-apiserver-ha-735960-m03" [baafa3bf-78ee-4269-9591-b0440927e055] Running
	I0701 12:27:05.729702  653531 system_pods.go:89] "kube-controller-manager-ha-735960" [3f0f0cf5-329d-47bc-b922-7583902e2607] Running
	I0701 12:27:05.729710  653531 system_pods.go:89] "kube-controller-manager-ha-735960-m02" [258fde18-ac5c-4446-842b-9465529b154c] Running
	I0701 12:27:05.729720  653531 system_pods.go:89] "kube-controller-manager-ha-735960-m03" [79acc56b-a9e0-4d4b-bc64-1a3a36ddf051] Running
	I0701 12:27:05.729729  653531 system_pods.go:89] "kube-proxy-25ssf" [11f0dc23-ab9d-4d39-988d-4c44dfde86cd] Running
	I0701 12:27:05.729737  653531 system_pods.go:89] "kube-proxy-776rt" [5666dac7-924e-4429-bd1d-a1a5647cc611] Running
	I0701 12:27:05.729745  653531 system_pods.go:89] "kube-proxy-b6knb" [eb36e930-5799-4ff7-821a-ccb22303cd1b] Running
	I0701 12:27:05.729755  653531 system_pods.go:89] "kube-proxy-lphzn" [0761a7a6-740e-4cde-9ab5-e02e8d417907] Running
	I0701 12:27:05.729764  653531 system_pods.go:89] "kube-scheduler-ha-735960" [c624cf42-a7d6-4aaf-859d-1aeaf29f9acb] Running
	I0701 12:27:05.729770  653531 system_pods.go:89] "kube-scheduler-ha-735960-m02" [7de78af7-2d79-46dc-bd34-f221d79fde06] Running
	I0701 12:27:05.729776  653531 system_pods.go:89] "kube-scheduler-ha-735960-m03" [9f9a2030-9332-44af-b8dc-3b4609e53f91] Running
	I0701 12:27:05.729783  653531 system_pods.go:89] "kube-vip-ha-735960" [4299679a-c145-4f4f-8ec6-3cd468b98ef1] Running
	I0701 12:27:05.729789  653531 system_pods.go:89] "kube-vip-ha-735960-m02" [1c9b13e1-515c-43c0-8d99-5ad1c1807727] Running
	I0701 12:27:05.729796  653531 system_pods.go:89] "kube-vip-ha-735960-m03" [7069ea7c-5461-4fe6-a969-97fe33396ebb] Running
	I0701 12:27:05.729802  653531 system_pods.go:89] "storage-provisioner" [f5c4f7f9-d648-4019-a5ea-6ce59f6c5663] Running
	I0701 12:27:05.729815  653531 system_pods.go:126] duration metric: took 210.095212ms to wait for k8s-apps to be running ...
	I0701 12:27:05.729829  653531 system_svc.go:44] waiting for kubelet service to be running ....
	I0701 12:27:05.729888  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:27:05.745646  653531 system_svc.go:56] duration metric: took 15.808828ms WaitForService to wait for kubelet
	I0701 12:27:05.745679  653531 kubeadm.go:576] duration metric: took 23.279640822s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:27:05.745702  653531 node_conditions.go:102] verifying NodePressure condition ...
	I0701 12:27:05.915161  653531 request.go:629] Waited for 169.354932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes
	I0701 12:27:05.915221  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes
	I0701 12:27:05.915226  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.915234  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.915239  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.919105  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:05.920307  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:05.920336  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:05.920352  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:05.920357  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:05.920361  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:05.920366  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:05.920370  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:05.920375  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:05.920382  653531 node_conditions.go:105] duration metric: took 174.672945ms to run NodePressure ...
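The NodePressure verification reads each node's capacity from the API; all four nodes above report 17734596Ki of ephemeral storage and 2 CPUs. A client-go sketch producing the same readout, again assuming a standard kubeconfig:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Matches the node_conditions.go:122/123 lines above.
            fmt.Printf("node storage ephemeral capacity is %s\n", n.Status.Capacity.StorageEphemeral().String())
            fmt.Printf("node cpu capacity is %s\n", n.Status.Capacity.Cpu().String())
        }
    }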
	I0701 12:27:05.920400  653531 start.go:240] waiting for startup goroutines ...
	I0701 12:27:05.920438  653531 start.go:254] writing updated cluster config ...
	I0701 12:27:05.922556  653531 out.go:177] 
	I0701 12:27:05.924320  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:27:05.924444  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:27:05.926228  653531 out.go:177] * Starting "ha-735960-m04" worker node in "ha-735960" cluster
	I0701 12:27:05.927583  653531 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:27:05.927623  653531 cache.go:56] Caching tarball of preloaded images
	I0701 12:27:05.927740  653531 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:27:05.927753  653531 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:27:05.927868  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:27:05.928081  653531 start.go:360] acquireMachinesLock for ha-735960-m04: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:27:05.928138  653531 start.go:364] duration metric: took 34.293µs to acquireMachinesLock for "ha-735960-m04"
	I0701 12:27:05.928160  653531 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:27:05.928170  653531 fix.go:54] fixHost starting: m04
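start.go:360 above serializes machine operations through a named lock with a 13-minute timeout (acquired in 34µs here since nothing contends). A rough model of such a keyed mutex, with hypothetical acquire/release helpers; minikube's real implementation differs:

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    var (
        mu    sync.Mutex
        locks = map[string]chan struct{}{}
    )

    // acquire takes the named lock, failing after timeout (hypothetical helper).
    func acquire(name string, timeout time.Duration) error {
        mu.Lock()
        ch, ok := locks[name]
        if !ok {
            ch = make(chan struct{}, 1)
            locks[name] = ch
        }
        mu.Unlock()
        select {
        case ch <- struct{}{}:
            return nil
        case <-time.After(timeout):
            return fmt.Errorf("timed out acquiring %q", name)
        }
    }

    // release frees the named lock (hypothetical helper).
    func release(name string) {
        mu.Lock()
        ch := locks[name]
        mu.Unlock()
        <-ch
    }

    func main() {
        start := time.Now()
        if err := acquire("ha-735960-m04", 13*time.Minute); err != nil {
            panic(err)
        }
        defer release("ha-735960-m04")
        fmt.Printf("took %v to acquire lock\n", time.Since(start))
    }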
	I0701 12:27:05.928452  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:27:05.928496  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:27:05.944734  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39337
	I0701 12:27:05.945306  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:27:05.945856  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:27:05.945878  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:27:05.946270  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:27:05.946505  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:05.946718  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetState
	I0701 12:27:05.948900  653531 fix.go:112] recreateIfNeeded on ha-735960-m04: state=Stopped err=<nil>
	I0701 12:27:05.948936  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	W0701 12:27:05.949137  653531 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:27:05.951007  653531 out.go:177] * Restarting existing kvm2 VM for "ha-735960-m04" ...
	I0701 12:27:05.952219  653531 main.go:141] libmachine: (ha-735960-m04) Calling .Start
	I0701 12:27:05.952428  653531 main.go:141] libmachine: (ha-735960-m04) Ensuring networks are active...
	I0701 12:27:05.953378  653531 main.go:141] libmachine: (ha-735960-m04) Ensuring network default is active
	I0701 12:27:05.953815  653531 main.go:141] libmachine: (ha-735960-m04) Ensuring network mk-ha-735960 is active
	I0701 12:27:05.954229  653531 main.go:141] libmachine: (ha-735960-m04) Getting domain xml...
	I0701 12:27:05.954857  653531 main.go:141] libmachine: (ha-735960-m04) Creating domain...
	I0701 12:27:07.274791  653531 main.go:141] libmachine: (ha-735960-m04) Waiting to get IP...
	I0701 12:27:07.275684  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:07.276224  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:07.276269  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:07.276176  654403 retry.go:31] will retry after 236.931472ms: waiting for machine to come up
	I0701 12:27:07.514910  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:07.515487  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:07.515520  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:07.515422  654403 retry.go:31] will retry after 376.766943ms: waiting for machine to come up
	I0701 12:27:07.894235  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:07.894716  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:07.894748  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:07.894658  654403 retry.go:31] will retry after 389.939732ms: waiting for machine to come up
	I0701 12:27:08.286528  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:08.287041  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:08.287066  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:08.286982  654403 retry.go:31] will retry after 542.184171ms: waiting for machine to come up
	I0701 12:27:08.831459  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:08.832024  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:08.832105  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:08.832069  654403 retry.go:31] will retry after 609.488369ms: waiting for machine to come up
	I0701 12:27:09.442798  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:09.443236  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:09.443272  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:09.443174  654403 retry.go:31] will retry after 777.604605ms: waiting for machine to come up
	I0701 12:27:10.221860  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:10.222317  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:10.222352  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:10.222242  654403 retry.go:31] will retry after 1.013463977s: waiting for machine to come up
	I0701 12:27:11.237171  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:11.237628  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:11.237658  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:11.237572  654403 retry.go:31] will retry after 1.368493369s: waiting for machine to come up
	I0701 12:27:12.607736  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:12.608308  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:12.608342  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:12.608254  654403 retry.go:31] will retry after 1.709127759s: waiting for machine to come up
	I0701 12:27:14.320033  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:14.320531  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:14.320565  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:14.320491  654403 retry.go:31] will retry after 2.145058749s: waiting for machine to come up
	I0701 12:27:16.466840  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:16.467246  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:16.467275  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:16.467196  654403 retry.go:31] will retry after 2.340416682s: waiting for machine to come up
	I0701 12:27:18.809756  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:18.810215  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:18.810245  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:18.810155  654403 retry.go:31] will retry after 2.893605535s: waiting for machine to come up
	I0701 12:27:21.705535  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.706011  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has current primary IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.706036  653531 main.go:141] libmachine: (ha-735960-m04) Found IP for machine: 192.168.39.60
	I0701 12:27:21.706050  653531 main.go:141] libmachine: (ha-735960-m04) Reserving static IP address...
	I0701 12:27:21.706638  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "ha-735960-m04", mac: "52:54:00:2d:8e:6d", ip: "192.168.39.60"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:21.706671  653531 main.go:141] libmachine: (ha-735960-m04) Reserved static IP address: 192.168.39.60
	I0701 12:27:21.706689  653531 main.go:141] libmachine: (ha-735960-m04) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960-m04", mac: "52:54:00:2d:8e:6d", ip: "192.168.39.60"}
	I0701 12:27:21.706703  653531 main.go:141] libmachine: (ha-735960-m04) DBG | Getting to WaitForSSH function...
	I0701 12:27:21.706715  653531 main.go:141] libmachine: (ha-735960-m04) Waiting for SSH to be available...
	I0701 12:27:21.709236  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.709702  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:21.709729  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.709818  653531 main.go:141] libmachine: (ha-735960-m04) DBG | Using SSH client type: external
	I0701 12:27:21.709841  653531 main.go:141] libmachine: (ha-735960-m04) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa (-rw-------)
	I0701 12:27:21.709870  653531 main.go:141] libmachine: (ha-735960-m04) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:27:21.709885  653531 main.go:141] libmachine: (ha-735960-m04) DBG | About to run SSH command:
	I0701 12:27:21.709897  653531 main.go:141] libmachine: (ha-735960-m04) DBG | exit 0
	I0701 12:27:21.838462  653531 main.go:141] libmachine: (ha-735960-m04) DBG | SSH cmd err, output: <nil>: 
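The DBG lines above show the IP wait as a capped retry loop: each attempt queries the libvirt network's DHCP leases for the domain's MAC, then sleeps for a growing, jittered interval (236ms, 376ms, ... 2.9s) until a lease appears and SSH answers. A sketch of that pattern, where lookupLeaseIP is a hypothetical stand-in for the lease query:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupLeaseIP is hypothetical: query the libvirt network's DHCP
    // leases for the given MAC address.
    func lookupLeaseIP(mac string) (string, error) {
        return "", errors.New("no lease yet")
    }

    func main() {
        delay := 250 * time.Millisecond
        for attempt := 1; attempt <= 13; attempt++ {
            if ip, err := lookupLeaseIP("52:54:00:2d:8e:6d"); err == nil {
                fmt.Println("found IP:", ip)
                return
            }
            // Add jitter so concurrent waiters do not poll in lockstep.
            d := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
            time.Sleep(d)
            delay = delay * 3 / 2 // grow the base delay, roughly matching the logged intervals
        }
        fmt.Println("gave up waiting for an IP")
    }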
	I0701 12:27:21.838803  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetConfigRaw
	I0701 12:27:21.839497  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetIP
	I0701 12:27:21.842255  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.842727  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:21.842764  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.843067  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:27:21.843309  653531 machine.go:94] provisionDockerMachine start ...
	I0701 12:27:21.843332  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:21.843625  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:21.846158  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.846625  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:21.846658  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.846874  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:21.847122  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:21.847313  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:21.847496  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:21.847763  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:21.847995  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:21.848012  653531 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:27:21.958527  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:27:21.958560  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetMachineName
	I0701 12:27:21.958896  653531 buildroot.go:166] provisioning hostname "ha-735960-m04"
	I0701 12:27:21.958928  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetMachineName
	I0701 12:27:21.959168  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:21.961718  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.962176  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:21.962212  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.962410  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:21.962629  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:21.962804  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:21.962930  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:21.963089  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:21.963293  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:21.963311  653531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960-m04 && echo "ha-735960-m04" | sudo tee /etc/hostname
	I0701 12:27:22.089150  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960-m04
	
	I0701 12:27:22.089185  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.092352  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.092805  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.092829  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.093059  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:22.093293  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.093532  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.093680  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:22.093947  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:22.094124  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:22.094152  653531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:27:22.211873  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 12:27:22.211908  653531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:27:22.211930  653531 buildroot.go:174] setting up certificates
	I0701 12:27:22.211938  653531 provision.go:84] configureAuth start
	I0701 12:27:22.211947  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetMachineName
	I0701 12:27:22.212269  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetIP
	I0701 12:27:22.215120  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.215523  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.215555  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.215810  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.218161  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.218800  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.218836  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.219044  653531 provision.go:143] copyHostCerts
	I0701 12:27:22.219086  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:27:22.219130  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:27:22.219141  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:27:22.219226  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:27:22.219330  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:27:22.219356  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:27:22.219365  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:27:22.219402  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:27:22.219472  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:27:22.219497  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:27:22.219503  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:27:22.219534  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:27:22.219602  653531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960-m04 san=[127.0.0.1 192.168.39.60 ha-735960-m04 localhost minikube]
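provision.go:117 above generates a TLS server certificate for the new node, signed by the shared minikube CA, with the SAN list shown ([127.0.0.1 192.168.39.60 ha-735960-m04 localhost minikube]). A sketch of equivalent generation with Go's crypto/x509, assuming an RSA PKCS#1 CA key; the file names are shortened from the log's full paths:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Assumption: ca.pem / ca-key.pem are the PEM files named in the log.
        caCertPEM, err := os.ReadFile("ca.pem")
        if err != nil {
            panic(err)
        }
        caKeyPEM, err := os.ReadFile("ca-key.pem")
        if err != nil {
            panic(err)
        }
        certBlock, _ := pem.Decode(caCertPEM)
        caCert, err := x509.ParseCertificate(certBlock.Bytes)
        if err != nil {
            panic(err)
        }
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
        if err != nil {
            panic(err)
        }
        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-735960-m04"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the log line above.
            DNSNames:    []string{"ha-735960-m04", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.60")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }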
	I0701 12:27:22.329827  653531 provision.go:177] copyRemoteCerts
	I0701 12:27:22.329892  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:27:22.329923  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.332967  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.333373  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.333406  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.333651  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:22.333896  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.334062  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:22.334281  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa Username:docker}
	I0701 12:27:22.417286  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:27:22.417383  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:27:22.441229  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:27:22.441316  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0701 12:27:22.465192  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:27:22.465262  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 12:27:22.489482  653531 provision.go:87] duration metric: took 277.524425ms to configureAuth
	I0701 12:27:22.489525  653531 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:27:22.489832  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:27:22.489882  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:22.490191  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.493387  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.493808  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.493842  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.494001  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:22.494272  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.494482  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.494666  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:22.494871  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:22.495082  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:22.495096  653531 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:27:22.603693  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:27:22.603722  653531 buildroot.go:70] root file system type: tmpfs
	I0701 12:27:22.603868  653531 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:27:22.603921  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.606932  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.607406  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.607441  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.607659  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:22.607881  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.608030  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.608161  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:22.608332  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:22.608539  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:22.608607  653531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.16"
	Environment="NO_PROXY=192.168.39.16,192.168.39.86"
	Environment="NO_PROXY=192.168.39.16,192.168.39.86,192.168.39.97"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:27:22.729176  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.16
	Environment=NO_PROXY=192.168.39.16,192.168.39.86
	Environment=NO_PROXY=192.168.39.16,192.168.39.86,192.168.39.97
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
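Note on the unit echoed above: it stacks three Environment= lines for NO_PROXY, and systemd applies same-name assignments in order, so only the last, fully expanded list ("192.168.39.16,192.168.39.86,192.168.39.97") is in effect. A minimal sketch of how such a drop-in could be rendered with Go's text/template — the template text and field names here are illustrative, not minikube's actual source:

package main

import (
	"os"
	"text/template"
)

// Hypothetical template: one Environment= line per entry in the chain.
// systemd keeps the later assignment when the variable name repeats.
const unit = `[Service]
{{range .NoProxyChain}}Environment="NO_PROXY={{.}}"
{{end}}`

func main() {
	t := template.Must(template.New("docker.service").Parse(unit))
	_ = t.Execute(os.Stdout, struct{ NoProxyChain []string }{[]string{
		"192.168.39.16",
		"192.168.39.16,192.168.39.86",
		"192.168.39.16,192.168.39.86,192.168.39.97",
	}})
}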
	I0701 12:27:22.729234  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.732936  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.733425  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.733462  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.733653  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:22.733908  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.734181  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.734376  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:22.734607  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:22.734842  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:22.734871  653531 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:27:24.534039  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 12:27:24.534075  653531 machine.go:97] duration metric: took 2.690748128s to provisionDockerMachine
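The provisioning step just completed follows a write-new/diff/swap pattern: stage docker.service.new, `diff -u` it against the live unit, and only when they differ (or the live unit is missing, as in the "can't stat" output above) move the staged file into place and daemon-reload/enable/restart. A local sketch of the same pattern, assuming root privileges and omitting the SSH transport the test actually uses:

package main

import (
	"os"
	"os/exec"
)

func replaceIfChanged(current, staged string) error {
	// `diff -u` exits non-zero when the files differ or current is missing --
	// exactly the cases where the staged unit should be swapped in.
	if err := exec.Command("diff", "-u", current, staged).Run(); err == nil {
		return nil // identical: nothing to do
	}
	if err := os.Rename(staged, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = replaceIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new")
}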
	I0701 12:27:24.534091  653531 start.go:293] postStartSetup for "ha-735960-m04" (driver="kvm2")
	I0701 12:27:24.534104  653531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:27:24.534123  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.534499  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:27:24.534541  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:24.537254  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.537740  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.537779  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.537959  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:24.538181  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.538373  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:24.538597  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa Username:docker}
	I0701 12:27:24.622239  653531 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:27:24.626566  653531 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:27:24.626597  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:27:24.626682  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:27:24.626776  653531 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:27:24.626790  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:27:24.626899  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:27:24.638615  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:27:24.662568  653531 start.go:296] duration metric: took 128.459164ms for postStartSetup
	I0701 12:27:24.662618  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.663010  653531 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:27:24.663051  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:24.665748  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.666087  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.666114  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.666265  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:24.666549  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.666727  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:24.666943  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa Username:docker}
	I0701 12:27:24.753987  653531 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:27:24.754081  653531 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:27:24.791910  653531 fix.go:56] duration metric: took 18.863722464s for fixHost
	I0701 12:27:24.791970  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:24.795473  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.795824  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.795860  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.796063  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:24.796321  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.796518  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.796690  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:24.796892  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:24.797130  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:24.797146  653531 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0701 12:27:24.911069  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836844.884316737
	
	I0701 12:27:24.911100  653531 fix.go:216] guest clock: 1719836844.884316737
	I0701 12:27:24.911110  653531 fix.go:229] Guest: 2024-07-01 12:27:24.884316737 +0000 UTC Remote: 2024-07-01 12:27:24.791945819 +0000 UTC m=+202.261797488 (delta=92.370918ms)
	I0701 12:27:24.911131  653531 fix.go:200] guest clock delta is within tolerance: 92.370918ms
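The fix.go lines above parse the guest's `date +%s.%N` output, compare it against the host-side timestamp, and accept the skew (here 92.370918ms) when it falls under a tolerance. A small sketch of that check — the one-second threshold is assumed for illustration and may not match minikube's actual tolerance:

package main

import (
	"fmt"
	"time"
)

const clockTolerance = time.Second // assumed threshold for illustration

func withinTolerance(guest, host time.Time) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta // absolute skew, guest may run fast or slow
	}
	return delta <= clockTolerance
}

func main() {
	guest := time.Unix(1719836844, 884316737) // parsed from `date +%s.%N`
	host := time.Now()
	fmt.Println("within tolerance:", withinTolerance(guest, host))
}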
	I0701 12:27:24.911137  653531 start.go:83] releasing machines lock for "ha-735960-m04", held for 18.982986548s
	I0701 12:27:24.911163  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.911481  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetIP
	I0701 12:27:24.914298  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.914691  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.914721  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.917119  653531 out.go:177] * Found network options:
	I0701 12:27:24.918569  653531 out.go:177]   - NO_PROXY=192.168.39.16,192.168.39.86,192.168.39.97
	W0701 12:27:24.919961  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:27:24.919987  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:27:24.919997  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:27:24.920012  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.920847  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.921063  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.921170  653531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:27:24.921210  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	W0701 12:27:24.921252  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:27:24.921277  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:27:24.921290  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:27:24.921364  653531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0701 12:27:24.921385  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:24.924253  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.924561  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.924715  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.924742  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.924933  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:24.925058  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.925080  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.925110  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.925325  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:24.925339  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:24.925519  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.925615  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa Username:docker}
	I0701 12:27:24.925685  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:24.925840  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa Username:docker}
	W0701 12:27:25.004044  653531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:27:25.004109  653531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:27:25.029712  653531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:27:25.029746  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:27:25.029880  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:27:25.052034  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:27:25.062847  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:27:25.073005  653531 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:27:25.073080  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:27:25.083300  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:27:25.093834  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:27:25.104814  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:27:25.115006  653531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:27:25.126080  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:27:25.136492  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:27:25.147986  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:27:25.158638  653531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:27:25.168301  653531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:27:25.177427  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:25.290645  653531 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 12:27:25.317946  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:27:25.318090  653531 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:27:25.333522  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:27:25.349308  653531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:27:25.366057  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:27:25.379554  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:27:25.393005  653531 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:27:25.427883  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:27:25.443710  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:27:25.462653  653531 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:27:25.466440  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:27:25.475817  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:27:25.491900  653531 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:27:25.609810  653531 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:27:25.736607  653531 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:27:25.736666  653531 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 12:27:25.753218  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:25.872913  653531 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:27:28.274644  653531 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.401692528s)
	I0701 12:27:28.274730  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 12:27:28.288270  653531 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0701 12:27:28.306360  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:27:28.320063  653531 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:27:28.444909  653531 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:27:28.582500  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:28.708064  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:27:28.728173  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:27:28.743660  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:28.873765  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0701 12:27:28.960958  653531 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:27:28.961063  653531 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0701 12:27:28.967089  653531 start.go:562] Will wait 60s for crictl version
	I0701 12:27:28.967205  653531 ssh_runner.go:195] Run: which crictl
	I0701 12:27:28.971404  653531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:27:29.011615  653531 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.1
	RuntimeApiVersion:  v1
	I0701 12:27:29.011699  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:27:29.041339  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:27:29.073461  653531 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
	I0701 12:27:29.075110  653531 out.go:177]   - env NO_PROXY=192.168.39.16
	I0701 12:27:29.076621  653531 out.go:177]   - env NO_PROXY=192.168.39.16,192.168.39.86
	I0701 12:27:29.078186  653531 out.go:177]   - env NO_PROXY=192.168.39.16,192.168.39.86,192.168.39.97
	I0701 12:27:29.079949  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetIP
	I0701 12:27:29.083268  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:29.083683  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:29.083711  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:29.084018  653531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0701 12:27:29.088562  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:27:29.105010  653531 mustload.go:65] Loading cluster: ha-735960
	I0701 12:27:29.105303  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:27:29.105654  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:27:29.105708  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:27:29.121628  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I0701 12:27:29.122222  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:27:29.122816  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:27:29.122844  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:27:29.123210  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:27:29.123475  653531 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:27:29.125364  653531 host.go:66] Checking if "ha-735960" exists ...
	I0701 12:27:29.125670  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:27:29.125708  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:27:29.141532  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36053
	I0701 12:27:29.142051  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:27:29.142638  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:27:29.142662  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:27:29.143010  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:27:29.143254  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:27:29.143488  653531 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960 for IP: 192.168.39.60
	I0701 12:27:29.143501  653531 certs.go:194] generating shared ca certs ...
	I0701 12:27:29.143518  653531 certs.go:226] acquiring lock for ca certs: {Name:mk34e166bfd069e523b2325e14d1812c523bff53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:27:29.143646  653531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key
	I0701 12:27:29.143686  653531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key
	I0701 12:27:29.143702  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 12:27:29.143722  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 12:27:29.143739  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 12:27:29.143757  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 12:27:29.143817  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem (1338 bytes)
	W0701 12:27:29.143851  653531 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854_empty.pem, impossibly tiny 0 bytes
	I0701 12:27:29.143871  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 12:27:29.143894  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:27:29.143916  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:27:29.143937  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem (1675 bytes)
	I0701 12:27:29.143972  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:27:29.144004  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:27:29.144021  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem -> /usr/share/ca-certificates/637854.pem
	I0701 12:27:29.144041  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /usr/share/ca-certificates/6378542.pem
	I0701 12:27:29.144072  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:27:29.171419  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:27:29.196509  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:27:29.222599  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:27:29.248989  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:27:29.275034  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem --> /usr/share/ca-certificates/637854.pem (1338 bytes)
	I0701 12:27:29.300102  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /usr/share/ca-certificates/6378542.pem (1708 bytes)
	I0701 12:27:29.327329  653531 ssh_runner.go:195] Run: openssl version
	I0701 12:27:29.333121  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:27:29.344555  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:27:29.349319  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:27:29.349394  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:27:29.355247  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 12:27:29.366285  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/637854.pem && ln -fs /usr/share/ca-certificates/637854.pem /etc/ssl/certs/637854.pem"
	I0701 12:27:29.376931  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/637854.pem
	I0701 12:27:29.381303  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 12:11 /usr/share/ca-certificates/637854.pem
	I0701 12:27:29.381385  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/637854.pem
	I0701 12:27:29.387458  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/637854.pem /etc/ssl/certs/51391683.0"
	I0701 12:27:29.398343  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6378542.pem && ln -fs /usr/share/ca-certificates/6378542.pem /etc/ssl/certs/6378542.pem"
	I0701 12:27:29.409321  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6378542.pem
	I0701 12:27:29.414299  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 12:11 /usr/share/ca-certificates/6378542.pem
	I0701 12:27:29.414400  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem
	I0701 12:27:29.420975  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6378542.pem /etc/ssl/certs/3ec20f2e.0"
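The openssl runs above implement OpenSSL's subject-hash lookup convention: each trusted PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0 pointing at it, which is how TLS clients on the VM locate the minikube CA. A sketch of that step, with the paths and openssl flags taken from the log and error handling trimmed:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkByHash(pem string) error {
	// `openssl x509 -hash -noout -in <pem>` prints the subject hash,
	// e.g. b5213941, which names the symlink OpenSSL will look up.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	// ln -fs equivalent: drop any stale link, then create a fresh one.
	_ = os.Remove(link)
	return os.Symlink(pem, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}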
	I0701 12:27:29.434286  653531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 12:27:29.438767  653531 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0701 12:27:29.438817  653531 kubeadm.go:928] updating node {m04 192.168.39.60 0 v1.30.2 docker false true} ...
	I0701 12:27:29.438918  653531 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0701 12:27:29.438988  653531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 12:27:29.450811  653531 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:27:29.450895  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0701 12:27:29.462511  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0701 12:27:29.480246  653531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:27:29.497624  653531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0701 12:27:29.502554  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
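Both /etc/hosts updates above use the same idempotent rewrite: filter out any existing line ending in the tab-separated hostname, append the fresh mapping, and copy the result back over /etc/hosts. A Go equivalent of that bash pipeline, for illustration only:

package main

import (
	"os"
	"strings"
)

func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirrors `grep -v $'\t<name>$'`: drop stale entries for this name.
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = pinHost("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal")
}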
	I0701 12:27:29.515005  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:29.648948  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:27:29.668809  653531 start.go:234] Will wait 6m0s for node &{Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0701 12:27:29.669186  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:27:29.671772  653531 out.go:177] * Verifying Kubernetes components...
	I0701 12:27:29.673288  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:29.823420  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:27:29.839349  653531 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:27:29.839675  653531 kapi.go:59] client config for ha-735960: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key", CAFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfbb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0701 12:27:29.839746  653531 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.16:8443
	I0701 12:27:29.840001  653531 node_ready.go:35] waiting up to 6m0s for node "ha-735960-m04" to be "Ready" ...
	I0701 12:27:29.840108  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:27:29.840118  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:29.840130  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:29.840138  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:29.843740  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.340654  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:27:30.340679  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.340687  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.340691  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.344079  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.344547  653531 node_ready.go:49] node "ha-735960-m04" has status "Ready":"True"
	I0701 12:27:30.344570  653531 node_ready.go:38] duration metric: took 504.547887ms for node "ha-735960-m04" to be "Ready" ...
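The round_trippers stanzas above poll GET /api/v1/nodes/<name> roughly every 500ms until the node's Ready condition reports "True". A shape-level sketch of that loop with plain net/http — the apiserver actually requires the client certificate from the kubeconfig, which is omitted here, so this shows only the polling logic, not a working client:

package main

import (
	"encoding/json"
	"net/http"
	"time"
)

// Just the fields needed to read status.conditions from a Node object.
type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func nodeReady(c *http.Client, url string) (bool, error) {
	req, _ := http.NewRequest("GET", url, nil)
	req.Header.Set("Accept", "application/json, */*") // same header as the log
	resp, err := c.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, cond := range n.Status.Conditions {
		if cond.Type == "Ready" {
			return cond.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	url := "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04"
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait above
	for time.Now().Before(deadline) {
		if ok, _ := nodeReady(http.DefaultClient, url); ok {
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}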
	I0701 12:27:30.344579  653531 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 12:27:30.344650  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:27:30.344660  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.344668  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.344675  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.351108  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:27:30.358660  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.358749  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:27:30.358758  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.358766  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.358771  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.362032  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.362784  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:30.362802  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.362812  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.362816  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.365450  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.365914  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:30.365936  653531 pod_ready.go:81] duration metric: took 7.248792ms for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.365949  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.366016  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p4rtz
	I0701 12:27:30.366025  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.366035  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.366043  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.368928  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.369820  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:30.369836  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.369843  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.369858  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.373004  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.373769  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:30.373785  653531 pod_ready.go:81] duration metric: took 7.830149ms for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.373794  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.373848  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960
	I0701 12:27:30.373856  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.373862  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.373867  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.376565  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.377340  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:30.377356  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.377363  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.377367  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.379523  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.379966  653531 pod_ready.go:92] pod "etcd-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:30.379982  653531 pod_ready.go:81] duration metric: took 6.178731ms for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.379991  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.380048  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m02
	I0701 12:27:30.380055  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.380062  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.380069  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.382485  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.383125  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:30.383141  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.383148  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.383155  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.385845  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.386599  653531 pod_ready.go:92] pod "etcd-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:30.386616  653531 pod_ready.go:81] duration metric: took 6.619715ms for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.386624  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.541077  653531 request.go:629] Waited for 154.380092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:27:30.541196  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:27:30.541207  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.541219  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.541229  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.544660  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.740754  653531 request.go:629] Waited for 195.337132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:30.740847  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:30.740857  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.740865  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.740869  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.744492  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.745072  653531 pod_ready.go:92] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:30.745094  653531 pod_ready.go:81] duration metric: took 358.462325ms for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.745123  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.941364  653531 request.go:629] Waited for 196.100673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960
	I0701 12:27:30.941453  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960
	I0701 12:27:30.941466  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.941477  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.941487  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.946577  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:27:31.140711  653531 request.go:629] Waited for 193.223112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:31.140788  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:31.140793  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:31.140800  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:31.140804  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:31.146571  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:27:31.147245  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:31.147269  653531 pod_ready.go:81] duration metric: took 402.135058ms for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:31.147280  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:31.341367  653531 request.go:629] Waited for 193.988845ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m02
	I0701 12:27:31.341477  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m02
	I0701 12:27:31.341489  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:31.341500  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:31.341508  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:31.345561  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:31.540709  653531 request.go:629] Waited for 194.115472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:31.540784  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:31.540789  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:31.540797  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:31.540800  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:31.544920  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:31.545652  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:31.545679  653531 pod_ready.go:81] duration metric: took 398.391166ms for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:31.545689  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:31.741170  653531 request.go:629] Waited for 195.369232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:31.741243  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:31.741251  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:31.741261  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:31.741272  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:31.745382  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:31.941422  653531 request.go:629] Waited for 195.397431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:31.941512  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:31.941517  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:31.941526  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:31.941531  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:31.945358  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:31.945947  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:31.945971  653531 pod_ready.go:81] duration metric: took 400.276204ms for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:31.945982  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:32.140926  653531 request.go:629] Waited for 194.860847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960
	I0701 12:27:32.141014  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960
	I0701 12:27:32.141023  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:32.141048  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:32.141058  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:32.146741  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:27:32.341040  653531 request.go:629] Waited for 193.334578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:32.341112  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:32.341117  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:32.341126  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:32.341132  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:32.344664  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:32.345182  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:32.345200  653531 pod_ready.go:81] duration metric: took 399.209545ms for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:32.345210  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:32.541314  653531 request.go:629] Waited for 196.016373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m02
	I0701 12:27:32.541395  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m02
	I0701 12:27:32.541402  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:32.541414  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:32.541424  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:32.545663  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:32.741118  653531 request.go:629] Waited for 194.597088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:32.741201  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:32.741209  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:32.741220  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:32.741228  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:32.745051  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:32.745612  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:32.745636  653531 pod_ready.go:81] duration metric: took 400.417224ms for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:32.745651  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:32.941594  653531 request.go:629] Waited for 195.859048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m03
	I0701 12:27:32.941697  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m03
	I0701 12:27:32.941704  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:32.941712  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:32.941720  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:32.945661  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:33.140796  653531 request.go:629] Waited for 194.297237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:33.140872  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:33.140881  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:33.140892  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:33.140902  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:33.148523  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:27:33.149119  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:33.149229  653531 pod_ready.go:81] duration metric: took 403.561455ms for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:33.149274  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:33.341103  653531 request.go:629] Waited for 191.712414ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25ssf
	I0701 12:27:33.341203  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25ssf
	I0701 12:27:33.341211  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:33.341222  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:33.341236  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:33.345005  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:33.541118  653531 request.go:629] Waited for 195.201433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:27:33.541195  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:27:33.541202  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:33.541212  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:33.541220  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:33.544937  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:33.546208  653531 pod_ready.go:92] pod "kube-proxy-25ssf" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:33.546231  653531 pod_ready.go:81] duration metric: took 396.932438ms for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:33.546244  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:33.741353  653531 request.go:629] Waited for 195.026851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:27:33.741456  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:27:33.741466  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:33.741475  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:33.741481  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:33.745239  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:33.941300  653531 request.go:629] Waited for 195.397929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:33.941381  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:33.941388  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:33.941399  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:33.941408  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:33.944917  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:33.945530  653531 pod_ready.go:92] pod "kube-proxy-776rt" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:33.945551  653531 pod_ready.go:81] duration metric: took 399.299813ms for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:33.945565  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:34.140984  653531 request.go:629] Waited for 195.324742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:27:34.141050  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:27:34.141055  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:34.141063  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:34.141075  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:34.144882  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:34.341131  653531 request.go:629] Waited for 195.426765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:34.341198  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:34.341203  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:34.341211  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:34.341215  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:34.344938  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:34.345533  653531 pod_ready.go:92] pod "kube-proxy-b6knb" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:34.345554  653531 pod_ready.go:81] duration metric: took 399.982623ms for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:34.345563  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:34.540691  653531 request.go:629] Waited for 195.046851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:27:34.540777  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:27:34.540782  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:34.540794  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:34.540798  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:34.544410  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:34.741782  653531 request.go:629] Waited for 196.474041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:34.741851  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:34.741856  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:34.741864  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:34.741869  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:34.745447  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:34.746289  653531 pod_ready.go:92] pod "kube-proxy-lphzn" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:34.746312  653531 pod_ready.go:81] duration metric: took 400.742893ms for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:34.746344  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:34.941411  653531 request.go:629] Waited for 194.97877ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:27:34.941489  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:27:34.941495  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:34.941502  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:34.941510  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:34.944984  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:35.141079  653531 request.go:629] Waited for 195.409668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:35.141163  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:35.141168  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:35.141176  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:35.141194  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:35.144737  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:35.145431  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:35.145471  653531 pod_ready.go:81] duration metric: took 399.115782ms for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:35.145485  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:35.341554  653531 request.go:629] Waited for 195.979537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:27:35.341639  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:27:35.341650  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:35.341661  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:35.341672  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:35.345199  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:35.541252  653531 request.go:629] Waited for 195.403848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:35.541340  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:35.541346  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:35.541354  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:35.541362  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:35.545398  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:35.546010  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:35.546037  653531 pod_ready.go:81] duration metric: took 400.543297ms for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:35.546051  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:35.741442  653531 request.go:629] Waited for 195.294004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:27:35.741533  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:27:35.741541  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:35.741553  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:35.741565  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:35.744725  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:35.940687  653531 request.go:629] Waited for 195.284608ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:35.940760  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:35.940766  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:35.940776  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:35.940783  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:35.944482  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:35.945011  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:35.945032  653531 pod_ready.go:81] duration metric: took 398.973476ms for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:35.945048  653531 pod_ready.go:38] duration metric: took 5.600458409s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 12:27:35.945074  653531 system_svc.go:44] waiting for kubelet service to be running ....
	I0701 12:27:35.945143  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:27:35.962762  653531 system_svc.go:56] duration metric: took 17.680549ms WaitForService to wait for kubelet
	I0701 12:27:35.962795  653531 kubeadm.go:576] duration metric: took 6.293928606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:27:35.962817  653531 node_conditions.go:102] verifying NodePressure condition ...
	I0701 12:27:36.141286  653531 request.go:629] Waited for 178.366419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes
	I0701 12:27:36.141375  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes
	I0701 12:27:36.141382  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:36.141394  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:36.141404  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:36.145426  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:36.146951  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:36.146977  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:36.146989  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:36.146992  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:36.146996  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:36.146999  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:36.147001  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:36.147004  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:36.147009  653531 node_conditions.go:105] duration metric: took 184.187151ms to run NodePressure ...
	I0701 12:27:36.147024  653531 start.go:240] waiting for startup goroutines ...
	I0701 12:27:36.147054  653531 start.go:254] writing updated cluster config ...
	I0701 12:27:36.147403  653531 ssh_runner.go:195] Run: rm -f paused
	I0701 12:27:36.201170  653531 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0701 12:27:36.203376  653531 out.go:177] * Done! kubectl is now configured to use "ha-735960" cluster and "default" namespace by default
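
The pod_ready loop above alternates a throttled GET on each system pod with a GET on the pod's node, and only reports "Ready":"True" once the pod's PodReady condition is True. A minimal client-go sketch of the same check (hypothetical kubeconfig path and pod name; not minikube's own code):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady mirrors the pod_ready.go check in the log above:
	// a pod counts as ready only when its PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumption: kubeconfig location
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // the 6m0s budget seen in the log
		defer cancel()
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-ha-735960", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(400 * time.Millisecond) // roughly the cadence the throttled requests show above
		}
	}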
	
	
	==> Docker <==
	Jul 01 12:25:13 ha-735960 cri-dockerd[1398]: time="2024-07-01T12:25:13Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.366654170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.366710385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.366723641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.367696676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.388479723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.388593936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.389018347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.389381366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.390771396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.391192786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.391291548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.391685449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:24 ha-735960 dockerd[1125]: time="2024-07-01T12:25:24.321168284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:24 ha-735960 dockerd[1125]: time="2024-07-01T12:25:24.321255362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:24 ha-735960 dockerd[1125]: time="2024-07-01T12:25:24.321269990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:24 ha-735960 dockerd[1125]: time="2024-07-01T12:25:24.321347198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:25 ha-735960 dockerd[1125]: time="2024-07-01T12:25:25.309227018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:25 ha-735960 dockerd[1125]: time="2024-07-01T12:25:25.309334545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:25 ha-735960 dockerd[1125]: time="2024-07-01T12:25:25.309346230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:25 ha-735960 dockerd[1125]: time="2024-07-01T12:25:25.309972461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:26 ha-735960 dockerd[1125]: time="2024-07-01T12:25:26.350220788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:26 ha-735960 dockerd[1125]: time="2024-07-01T12:25:26.350306647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:26 ha-735960 dockerd[1125]: time="2024-07-01T12:25:26.350329844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:26 ha-735960 dockerd[1125]: time="2024-07-01T12:25:26.350448560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	51a34f4432461       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       1                   d2dc46de092d5       storage-provisioner
	bf788c37e0912       ac1c61439df46                                                                                         2 minutes ago       Running             kindnet-cni               1                   afbde11b8a740       kindnet-7f6hm
	8cdf2026ed072       8c811b4aec35f                                                                                         2 minutes ago       Running             busybox                   1                   7d907d7b28c98       busybox-fc5497c4f-pjfcw
	710f5c3a9f856       53c535741fb44                                                                                         2 minutes ago       Running             kube-proxy                1                   e49ff3fb80595       kube-proxy-lphzn
	61dc29970290b       cbb01a7bd410d                                                                                         2 minutes ago       Running             coredns                   1                   de1daec45ac89       coredns-7db6d8ff4d-p4rtz
	4a151786b08f5       cbb01a7bd410d                                                                                         2 minutes ago       Running             coredns                   1                   26981372e6136       coredns-7db6d8ff4d-nk4lf
	8ee3e44a43c3b       56ce0fd9fb532                                                                                         2 minutes ago       Running             kube-apiserver            5                   1b92afc0e4763       kube-apiserver-ha-735960
	67dc946c8c45c       e874818b3caac                                                                                         2 minutes ago       Running             kube-controller-manager   5                   3379ae4b4d689       kube-controller-manager-ha-735960
	1c046b029aa4a       38af8ddebf499                                                                                         3 minutes ago       Running             kube-vip                  1                   32c93b266a82d       kube-vip-ha-735960
	693eb0b8f5d78       7820c83aa1394                                                                                         3 minutes ago       Running             kube-scheduler            2                   ec2e5d106b539       kube-scheduler-ha-735960
	ec2c061093f10       e874818b3caac                                                                                         3 minutes ago       Exited              kube-controller-manager   4                   3379ae4b4d689       kube-controller-manager-ha-735960
	852492f61fee7       3861cfcd7c04c                                                                                         3 minutes ago       Running             etcd                      2                   c9044136ea747       etcd-ha-735960
	a3cb59ee8d572       56ce0fd9fb532                                                                                         3 minutes ago       Exited              kube-apiserver            4                   1b92afc0e4763       kube-apiserver-ha-735960
	cecb3dd12e16e       38af8ddebf499                                                                                         5 minutes ago       Exited              kube-vip                  0                   8d1562fb4b8c3       kube-vip-ha-735960
	6a200a6b49020       3861cfcd7c04c                                                                                         5 minutes ago       Exited              etcd                      1                   5b1097d48d724       etcd-ha-735960
	2d71437c5f06d       7820c83aa1394                                                                                         5 minutes ago       Exited              kube-scheduler            1                   fa7dea6a1b8bd       kube-scheduler-ha-735960
	1ef6d9da6a9c5       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   9 minutes ago       Exited              busybox                   0                   1f5ccc7b0e655       busybox-fc5497c4f-pjfcw
	a9c30cd4b3455       cbb01a7bd410d                                                                                         11 minutes ago      Exited              coredns                   0                   7b4b4f7ec4b63       coredns-7db6d8ff4d-nk4lf
	769b0b8751350       cbb01a7bd410d                                                                                         11 minutes ago      Exited              coredns                   0                   7a349370d4f88       coredns-7db6d8ff4d-p4rtz
	f472aef5302fd       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              11 minutes ago      Exited              kindnet-cni               0                   ab9c74a502295       kindnet-7f6hm
	6116abe6039dc       53c535741fb44                                                                                         11 minutes ago      Exited              kube-proxy                0                   da69191059798       kube-proxy-lphzn
	
	
	==> coredns [4a151786b08f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47509 - 49224 "HINFO IN 6979381009676685748.1822735874857968465. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033568754s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[177456986]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.743) (total time: 30001ms):
	Trace[177456986]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:25:53.744)
	Trace[177456986]: [30.001445665s] [30.001445665s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[947462717]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.742) (total time: 30003ms):
	Trace[947462717]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:25:53.743)
	Trace[947462717]: [30.0032009s] [30.0032009s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[886534813]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.742) (total time: 30004ms):
	Trace[886534813]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (12:25:53.745)
	Trace[886534813]: [30.004749172s] [30.004749172s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
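
The dial tcp 10.96.0.1:443: i/o timeout entries above show this coredns replica unable to reach the in-cluster kubernetes Service VIP for roughly the first 30 seconds after the restart (each reflector list gives up at its 30s deadline), consistent with service routing still converging while kube-proxy was itself restarting. A minimal probe for the same condition, assuming it runs inside the pod network and using the VIP taken from the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 10.96.0.1:443 is the default kubernetes Service VIP seen in the log above.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("service VIP unreachable:", err) // the state coredns was logging
			return
		}
		defer conn.Close()
		fmt.Println("service VIP reachable")
	}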
	
	
	==> coredns [61dc29970290] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49574 - 32592 "HINFO IN 7534101530096432962.1842168600618500663. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017366932s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2027452150]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.742) (total time: 30003ms):
	Trace[2027452150]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:25:53.743)
	Trace[2027452150]: [30.003896779s] [30.003896779s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[222503702]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.743) (total time: 30003ms):
	Trace[222503702]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:25:53.744)
	Trace[222503702]: [30.003901467s] [30.003901467s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1950728267]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.742) (total time: 30005ms):
	Trace[1950728267]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (12:25:53.745)
	Trace[1950728267]: [30.005235099s] [30.005235099s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [769b0b875135] <==
	[INFO] 10.244.1.2:44221 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000082797s
	[INFO] 10.244.2.2:33797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157729s
	[INFO] 10.244.2.2:52590 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004055351s
	[INFO] 10.244.2.2:46983 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003253494s
	[INFO] 10.244.2.2:56187 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205215s
	[INFO] 10.244.2.2:41086 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158307s
	[INFO] 10.244.0.4:47783 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097077s
	[INFO] 10.244.0.4:50743 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001523s
	[INFO] 10.244.0.4:37141 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138763s
	[INFO] 10.244.1.2:32981 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132906s
	[INFO] 10.244.1.2:36762 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001646552s
	[INFO] 10.244.1.2:33583 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072434s
	[INFO] 10.244.2.2:37027 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156518s
	[INFO] 10.244.2.2:58435 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104504s
	[INFO] 10.244.2.2:36107 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090251s
	[INFO] 10.244.0.4:44792 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000227164s
	[INFO] 10.244.0.4:56557 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000140925s
	[INFO] 10.244.1.2:38284 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000232717s
	[INFO] 10.244.2.2:37664 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135198s
	[INFO] 10.244.2.2:60876 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00032392s
	[INFO] 10.244.1.2:37461 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133264s
	[INFO] 10.244.1.2:45182 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117372s
	[INFO] 10.244.1.2:37156 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000240093s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a9c30cd4b345] <==
	[INFO] 10.244.0.4:57095 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002251804s
	[INFO] 10.244.0.4:42381 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081215s
	[INFO] 10.244.0.4:53499 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00124929s
	[INFO] 10.244.0.4:41287 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174281s
	[INFO] 10.244.0.4:36433 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142863s
	[INFO] 10.244.1.2:47688 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130034s
	[INFO] 10.244.1.2:40562 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00183587s
	[INFO] 10.244.1.2:35137 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000771s
	[INFO] 10.244.1.2:37798 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184282s
	[INFO] 10.244.1.2:43876 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008807s
	[INFO] 10.244.2.2:35039 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119303s
	[INFO] 10.244.0.4:53229 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090292s
	[INFO] 10.244.0.4:42097 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011308s
	[INFO] 10.244.1.2:42114 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130767s
	[INFO] 10.244.1.2:56638 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110707s
	[INFO] 10.244.1.2:55805 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093484s
	[INFO] 10.244.2.2:51675 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000145117s
	[INFO] 10.244.2.2:56838 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136843s
	[INFO] 10.244.0.4:60951 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162889s
	[INFO] 10.244.0.4:34776 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112367s
	[INFO] 10.244.0.4:45397 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000073771s
	[INFO] 10.244.0.4:52372 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000058127s
	[INFO] 10.244.1.2:41033 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131962s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
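
In the query logs above, cluster.local names return NOERROR with the aa (authoritative) flag set, while bare names such as kubernetes.default. fall outside the cluster zone, are forwarded upstream (qr,rd,ra, no aa), and come back NXDOMAIN; clients normally resolve the short form only via the pod's resolv.conf search path. A small resolver sketch of the difference, assuming it runs inside a pod with the default DNS configuration:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()
		// The trailing dot makes the first name fully qualified, bypassing the
		// search path, which is why coredns saw it as-is and it went NXDOMAIN.
		for _, name := range []string{"kubernetes.default.", "kubernetes.default.svc.cluster.local."} {
			ips, err := net.DefaultResolver.LookupIPAddr(ctx, name)
			fmt.Println(name, ips, err)
		}
	}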
	
	
	==> describe nodes <==
	Name:               ha-735960
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-735960
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=ha-735960
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_01T12_15_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 12:15:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-735960
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 12:27:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 12:25:13 +0000   Mon, 01 Jul 2024 12:15:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 12:25:13 +0000   Mon, 01 Jul 2024 12:15:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 12:25:13 +0000   Mon, 01 Jul 2024 12:15:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 12:25:13 +0000   Mon, 01 Jul 2024 12:16:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.16
	  Hostname:    ha-735960
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a500128d5645446baeea5654afbcb060
	  System UUID:                a500128d-5645-446b-aeea-5654afbcb060
	  Boot ID:                    a9ffe936-2356-415e-aa5e-ceedcf15ed72
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pjfcw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 coredns-7db6d8ff4d-nk4lf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 coredns-7db6d8ff4d-p4rtz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 etcd-ha-735960                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-7f6hm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-735960             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-735960    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-lphzn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-735960             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-735960                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  Starting                 2m14s                  kube-proxy       
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                    kubelet          Node ha-735960 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                    kubelet          Node ha-735960 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                    kubelet          Node ha-735960 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                    node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  NodeReady                11m                    kubelet          Node ha-735960 status is now: NodeReady
	  Normal  RegisteredNode           10m                    node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  RegisteredNode           9m12s                  node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  RegisteredNode           7m3s                   node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  NodeHasSufficientMemory  3m12s (x8 over 3m12s)  kubelet          Node ha-735960 status is now: NodeHasSufficientMemory
	  Normal  Starting                 3m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    3m12s (x8 over 3m12s)  kubelet          Node ha-735960 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m12s (x7 over 3m12s)  kubelet          Node ha-735960 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m25s                  node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  RegisteredNode           2m14s                  node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  RegisteredNode           38s                    node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
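
Everything kubectl renders in this section comes from the Node objects; the NodePressure verification near the end of the start log above reads the same data programmatically (each node reporting cpu 2 and ephemeral-storage 17734596Ki, pressure conditions False, Ready True). A minimal client-go sketch of that verification (hypothetical kubeconfig path; not the test's own code):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumption: kubeconfig location
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				// Ready should be True; every other condition (the pressure
				// conditions in particular) should be False.
				want := corev1.ConditionFalse
				if c.Type == corev1.NodeReady {
					want = corev1.ConditionTrue
				}
				if c.Status != want {
					fmt.Printf("%s: unexpected %s=%s\n", n.Name, c.Type, c.Status)
				}
			}
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
				n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		}
	}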
	
	
	Name:               ha-735960-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-735960-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=ha-735960
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_01T12_17_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 12:16:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-735960-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 12:27:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 12:25:08 +0000   Mon, 01 Jul 2024 12:16:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 12:25:08 +0000   Mon, 01 Jul 2024 12:16:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 12:25:08 +0000   Mon, 01 Jul 2024 12:16:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 12:25:08 +0000   Mon, 01 Jul 2024 12:17:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.86
	  Hostname:    ha-735960-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 58cf4e4771994f2084a06f7d76199172
	  System UUID:                58cf4e47-7199-4f20-84a0-6f7d76199172
	  Boot ID:                    41c32de2-f03a-41e4-b332-91dc3dc2ccaf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-twnb4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 etcd-ha-735960-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-bztzv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-735960-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-735960-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-b6knb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-735960-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-735960-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m20s                  kube-proxy       
	  Normal   Starting                 7m16s                  kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-735960-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-735960-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-735960-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   RegisteredNode           9m12s                  node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   Starting                 7m21s                  kubelet          Starting kubelet.
	  Warning  Rebooted                 7m21s                  kubelet          Node ha-735960-m02 has been rebooted, boot id: 64290a4a-a20d-436b-8567-0d3e8b822776
	  Normal   NodeHasSufficientPID     7m21s                  kubelet          Node ha-735960-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  7m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  7m21s                  kubelet          Node ha-735960-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m21s                  kubelet          Node ha-735960-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           7m3s                   node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m48s (x8 over 2m48s)  kubelet          Node ha-735960-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x8 over 2m48s)  kubelet          Node ha-735960-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x7 over 2m48s)  kubelet          Node ha-735960-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           2m25s                  node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   RegisteredNode           2m14s                  node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   RegisteredNode           38s                    node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
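
The percentages in the Allocated resources tables are simply requests (or limits) divided by the node's allocatable capacity: with 2 CPUs (2000m) and 2164184Ki of memory, 750m of CPU requests truncates to 37% and 150Mi of memory to 7%. A quick sanity check of that arithmetic, as a standalone Go sketch (capacity and request figures taken from the node descriptions above):

    package main

    import "fmt"

    func main() {
    	// Allocatable capacity from the node description: 2 CPUs, 2164184Ki memory.
    	cpuAllocatableMilli := 2 * 1000 // 2000m
    	memAllocatableKi := 2164184     // Ki

    	// Requests reported for ha-735960-m02.
    	cpuRequestsMilli := 750    // 750m
    	memRequestsKi := 150 * 1024 // 150Mi in Ki

    	// kubectl truncates toward zero, so 37.5% is displayed as 37%.
    	fmt.Printf("cpu: %d%%\n", cpuRequestsMilli*100/cpuAllocatableMilli) // cpu: 37%
    	fmt.Printf("memory: %d%%\n", memRequestsKi*100/memAllocatableKi)   // memory: 7%
    }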
	
	
	Name:               ha-735960-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-735960-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=ha-735960
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_01T12_18_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 12:18:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-735960-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 12:27:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 12:26:42 +0000   Mon, 01 Jul 2024 12:26:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 12:26:42 +0000   Mon, 01 Jul 2024 12:26:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 12:26:42 +0000   Mon, 01 Jul 2024 12:26:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 12:26:42 +0000   Mon, 01 Jul 2024 12:26:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    ha-735960-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 995d5c3b59f847378d8e94e940e73ad6
	  System UUID:                995d5c3b-59f8-4737-8d8e-94e940e73ad6
	  Boot ID:                    bc7ccd53-413f-4b49-a89c-18c93eb90ad9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cpsct                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 etcd-ha-735960-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m29s
	  kube-system                 kindnet-2424m                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m31s
	  kube-system                 kube-apiserver-ha-735960-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m29s
	  kube-system                 kube-controller-manager-ha-735960-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m29s
	  kube-system                 kube-proxy-776rt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m31s
	  kube-system                 kube-scheduler-ha-735960-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m29s
	  kube-system                 kube-vip-ha-735960-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 51s                    kube-proxy       
	  Normal   Starting                 9m26s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  9m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9m31s (x8 over 9m31s)  kubelet          Node ha-735960-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m31s (x8 over 9m31s)  kubelet          Node ha-735960-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m31s (x7 over 9m31s)  kubelet          Node ha-735960-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m28s                  node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   RegisteredNode           9m27s                  node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   RegisteredNode           9m12s                  node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   RegisteredNode           7m3s                   node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   RegisteredNode           2m25s                  node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   RegisteredNode           2m14s                  node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   NodeNotReady             105s                   node-controller  Node ha-735960-m03 status is now: NodeNotReady
	  Normal   Starting                 56s                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  56s                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  56s (x3 over 56s)      kubelet          Node ha-735960-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    56s (x3 over 56s)      kubelet          Node ha-735960-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     56s (x3 over 56s)      kubelet          Node ha-735960-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 56s (x2 over 56s)      kubelet          Node ha-735960-m03 has been rebooted, boot id: bc7ccd53-413f-4b49-a89c-18c93eb90ad9
	  Normal   NodeReady                56s (x2 over 56s)      kubelet          Node ha-735960-m03 status is now: NodeReady
	  Normal   RegisteredNode           38s                    node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	
	
	Name:               ha-735960-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-735960-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=ha-735960
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_01T12_19_10_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 12:19:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-735960-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 12:27:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 12:27:30 +0000   Mon, 01 Jul 2024 12:27:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 12:27:30 +0000   Mon, 01 Jul 2024 12:27:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 12:27:30 +0000   Mon, 01 Jul 2024 12:27:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 12:27:30 +0000   Mon, 01 Jul 2024 12:27:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    ha-735960-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd9ce62e425d4b9a9ba9ce7045362f6f
	  System UUID:                fd9ce62e-425d-4b9a-9ba9-ce7045362f6f
	  Boot ID:                    ac395c38-b578-4b7c-8c31-9939ff570d11
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6gx8s       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m29s
	  kube-system                 kube-proxy-25ssf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m22s                  kube-proxy       
	  Normal   Starting                 6s                     kube-proxy       
	  Normal   NodeHasSufficientMemory  8m29s (x2 over 8m29s)  kubelet          Node ha-735960-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m29s (x2 over 8m29s)  kubelet          Node ha-735960-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m29s (x2 over 8m29s)  kubelet          Node ha-735960-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  8m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           8m28s                  node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   RegisteredNode           8m27s                  node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   RegisteredNode           8m27s                  node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   NodeReady                8m17s                  kubelet          Node ha-735960-m04 status is now: NodeReady
	  Normal   RegisteredNode           7m3s                   node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   RegisteredNode           2m25s                  node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   RegisteredNode           2m14s                  node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   NodeNotReady             105s                   node-controller  Node ha-735960-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           38s                    node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   Starting                 9s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                     kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)        kubelet          Node ha-735960-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)        kubelet          Node ha-735960-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)        kubelet          Node ha-735960-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                     kubelet          Node ha-735960-m04 has been rebooted, boot id: ac395c38-b578-4b7c-8c31-9939ff570d11
	  Normal   NodeReady                8s                     kubelet          Node ha-735960-m04 status is now: NodeReady
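
Each node's PodCIDR above is carved from the cluster CIDR in registration order: 10.244.1.0/24 for m02, 10.244.2.0/24 for m03, 10.244.3.0/24 for m04. A minimal sketch of that allocation, assuming the default 10.244.0.0/16 cluster CIDR with /24 node masks (consistent with the values shown, but illustrative rather than the node-ipam controller's actual code):

    package main

    import (
    	"fmt"
    	"net"
    )

    // nodePodCIDR returns the i-th /24 subnet of clusterCIDR, mirroring how
    // PodCIDRs are handed out to nodes in registration order.
    func nodePodCIDR(clusterCIDR string, i int) (*net.IPNet, error) {
    	_, base, err := net.ParseCIDR(clusterCIDR)
    	if err != nil {
    		return nil, err
    	}
    	ip := base.IP.To4()
    	// Bump the third octet; valid while i < 256 for a /16 split into /24s.
    	sub := net.IPv4(ip[0], ip[1], ip[2]+byte(i), 0)
    	return &net.IPNet{IP: sub, Mask: net.CIDRMask(24, 32)}, nil
    }

    func main() {
    	for i, node := range []string{"ha-735960", "ha-735960-m02", "ha-735960-m03", "ha-735960-m04"} {
    		cidr, _ := nodePodCIDR("10.244.0.0/16", i)
    		fmt.Printf("%s -> %s\n", node, cidr) // e.g. ha-735960-m04 -> 10.244.3.0/24
    	}
    }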
	
	
	==> dmesg <==
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050613] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036847] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.466422] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.742414] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.542503] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.890956] systemd-fstab-generator[479]: Ignoring "noauto" option for root device
	[  +0.054969] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050473] systemd-fstab-generator[491]: Ignoring "noauto" option for root device
	[  +2.186564] systemd-fstab-generator[1047]: Ignoring "noauto" option for root device
	[  +0.281745] systemd-fstab-generator[1084]: Ignoring "noauto" option for root device
	[  +0.110826] systemd-fstab-generator[1096]: Ignoring "noauto" option for root device
	[  +0.123894] systemd-fstab-generator[1110]: Ignoring "noauto" option for root device
	[  +2.248144] kauditd_printk_skb: 195 callbacks suppressed
	[  +0.296890] systemd-fstab-generator[1351]: Ignoring "noauto" option for root device
	[  +0.110572] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
	[  +0.111234] systemd-fstab-generator[1375]: Ignoring "noauto" option for root device
	[  +0.128120] systemd-fstab-generator[1390]: Ignoring "noauto" option for root device
	[  +0.483978] systemd-fstab-generator[1543]: Ignoring "noauto" option for root device
	[  +6.839985] kauditd_printk_skb: 176 callbacks suppressed
	[ +10.416982] kauditd_printk_skb: 40 callbacks suppressed
	[Jul 1 12:25] kauditd_printk_skb: 30 callbacks suppressed
	[ +36.086285] kauditd_printk_skb: 48 callbacks suppressed
	
	
	==> etcd [6a200a6b4902] <==
	{"level":"info","ts":"2024-07-01T12:23:54.888482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:54.888629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:54.888657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:54.888687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:54.88881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:56.288805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:56.288918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:56.288952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:56.289018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:56.289055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:57.688608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:57.688647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:57.688658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:57.688677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:57.688686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"warn","ts":"2024-07-01T12:23:57.772826Z","caller":"etcdserver/server.go:2089","msg":"failed to publish local member to cluster through raft","local-member-id":"b6c76b3131c1024","local-member-attributes":"{Name:ha-735960 ClientURLs:[https://192.168.39.16:2379]}","request-path":"/0/members/b6c76b3131c1024/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-07-01T12:23:59.088585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:59.088645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:59.08866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:59.088676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:59.088691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"warn","ts":"2024-07-01T12:23:59.821067Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c77bbbee62c21090","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:23:59.821149Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c77bbbee62c21090","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:23:59.836394Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-01T12:23:59.837603Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: no route to host"}
	
	
	==> etcd [852492f61fee] <==
	{"level":"warn","ts":"2024-07-01T12:26:26.327522Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.97:2380/version","remote-member-id":"77557cf66c24e9ff","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:26.327591Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"77557cf66c24e9ff","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:28.673762Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:28.673886Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:30.329643Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.97:2380/version","remote-member-id":"77557cf66c24e9ff","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:30.329708Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"77557cf66c24e9ff","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:33.674228Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:33.674291Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:34.331758Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.97:2380/version","remote-member-id":"77557cf66c24e9ff","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:34.331871Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"77557cf66c24e9ff","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:38.333902Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.97:2380/version","remote-member-id":"77557cf66c24e9ff","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:38.334199Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"77557cf66c24e9ff","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:38.674977Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:38.675107Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:42.336588Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.97:2380/version","remote-member-id":"77557cf66c24e9ff","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:42.336721Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"77557cf66c24e9ff","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:43.675872Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:43.675816Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-01T12:26:44.691256Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:26:44.707815Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b6c76b3131c1024","to":"77557cf66c24e9ff","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-01T12:26:44.707933Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b6c76b3131c1024","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:26:44.734098Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b6c76b3131c1024","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:26:44.734341Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b6c76b3131c1024","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:26:44.734943Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b6c76b3131c1024","to":"77557cf66c24e9ff","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-01T12:26:44.734997Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b6c76b3131c1024","remote-peer-id":"77557cf66c24e9ff"}
	
	
	==> kernel <==
	 12:27:38 up 3 min,  0 users,  load average: 0.12, 0.16, 0.08
	Linux ha-735960 5.10.207 #1 SMP Wed Jun 26 19:37:34 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bf788c37e091] <==
	I0701 12:27:06.456938       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:27:16.469806       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:27:16.469876       1 main.go:227] handling current node
	I0701 12:27:16.469887       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:27:16.469892       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:27:16.470093       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:27:16.470154       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:27:16.470277       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:27:16.470296       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:27:26.489056       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:27:26.489096       1 main.go:227] handling current node
	I0701 12:27:26.489107       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:27:26.489112       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:27:26.489365       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:27:26.489389       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:27:26.489445       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:27:26.489502       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:27:36.502509       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:27:36.502721       1 main.go:227] handling current node
	I0701 12:27:36.502867       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:27:36.502957       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:27:36.503231       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:27:36.503293       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:27:36.503421       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:27:36.503550       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
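
Both kindnet instances log the same reconcile pattern: roughly every ten seconds they walk the node list, skip the node they run on ("handling current node"), and ensure a route to every remote node's pod CIDR via that node's InternalIP. A simplified sketch of that loop (illustrative only; the real kindnetd installs routes via netlink rather than printing them):

    package main

    import "fmt"

    type node struct {
    	name, internalIP, podCIDR string
    	current                   bool
    }

    // reconcile mirrors the log above: one pass over the node list, ensuring a
    // route to each remote pod CIDR via the owning node's IP.
    func reconcile(nodes []node) {
    	for _, n := range nodes {
    		fmt.Printf("Handling node with IPs: map[%s:{}]\n", n.internalIP)
    		if n.current {
    			fmt.Println("handling current node") // local CIDR needs no route
    			continue
    		}
    		fmt.Printf("Node %s has CIDR [%s] -> route %s via %s\n",
    			n.name, n.podCIDR, n.podCIDR, n.internalIP)
    	}
    }

    func main() {
    	reconcile([]node{
    		{"ha-735960", "192.168.39.16", "10.244.0.0/24", true},
    		{"ha-735960-m02", "192.168.39.86", "10.244.1.0/24", false},
    		{"ha-735960-m03", "192.168.39.97", "10.244.2.0/24", false},
    		{"ha-735960-m04", "192.168.39.60", "10.244.3.0/24", false},
    	})
    }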
	
	
	==> kindnet [f472aef5302f] <==
	I0701 12:20:12.428842       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:20:22.443154       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:20:22.443292       1 main.go:227] handling current node
	I0701 12:20:22.443323       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:20:22.443388       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:20:22.443605       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:20:22.443653       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:20:22.443793       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:20:22.443836       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:20:32.451395       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:20:32.451431       1 main.go:227] handling current node
	I0701 12:20:32.451481       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:20:32.451486       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:20:32.451947       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:20:32.451980       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:20:32.452873       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:20:32.453015       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:20:42.470169       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:20:42.470264       1 main.go:227] handling current node
	I0701 12:20:42.470289       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:20:42.470302       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:20:42.470523       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:20:42.470616       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:20:42.470868       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:20:42.470914       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8ee3e44a43c3] <==
	I0701 12:25:11.632913       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0701 12:25:11.645811       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0701 12:25:11.645876       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0701 12:25:11.690103       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0701 12:25:11.690292       1 policy_source.go:224] refreshing policies
	I0701 12:25:11.718179       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0701 12:25:11.726917       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0701 12:25:11.729879       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0701 12:25:11.730212       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0701 12:25:11.730238       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0701 12:25:11.737552       1 shared_informer.go:320] Caches are synced for configmaps
	I0701 12:25:11.751625       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0701 12:25:11.752269       1 aggregator.go:165] initial CRD sync complete...
	I0701 12:25:11.752312       1 autoregister_controller.go:141] Starting autoregister controller
	I0701 12:25:11.752319       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0701 12:25:11.752325       1 cache.go:39] Caches are synced for autoregister controller
	I0701 12:25:11.756015       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0701 12:25:11.757180       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0701 12:25:11.779526       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0701 12:25:11.807352       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.86]
	I0701 12:25:11.811699       1 controller.go:615] quota admission added evaluator for: endpoints
	I0701 12:25:11.839496       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0701 12:25:11.843047       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0701 12:25:12.631101       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0701 12:25:13.074615       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.16 192.168.39.86]
	
	
	==> kube-apiserver [a3cb59ee8d57] <==
	I0701 12:24:33.660467       1 options.go:221] external host was not specified, using 192.168.39.16
	I0701 12:24:33.670142       1 server.go:148] Version: v1.30.2
	I0701 12:24:33.670491       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:24:34.296638       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0701 12:24:34.308879       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0701 12:24:34.324179       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0701 12:24:34.324219       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0701 12:24:34.326894       1 instance.go:299] Using reconciler: lease
	W0701 12:24:54.288105       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0701 12:24:54.289911       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0701 12:24:54.328399       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
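
This earlier apiserver instance never came up: its storage factory needs a healthy etcd at 127.0.0.1:2379, and after the failed gRPC handshakes it hits the context deadline and exits (the F-level line). The underlying failure mode can be checked with a plain TCP probe, a minimal sketch using only the standard library:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// The endpoint the apiserver's storage factory was dialing in the log above.
    	conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 2*time.Second)
    	if err != nil {
    		fmt.Println("etcd unreachable:", err) // what this apiserver instance saw
    		return
    	}
    	conn.Close()
    	fmt.Println("etcd reachable; apiserver bootstrap can proceed")
    }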
	
	
	==> kube-controller-manager [67dc946c8c45] <==
	I0701 12:25:24.689462       1 shared_informer.go:320] Caches are synced for deployment
	I0701 12:25:24.698997       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0701 12:25:24.699584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="194.691µs"
	I0701 12:25:24.699894       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="568.701µs"
	I0701 12:25:24.704343       1 shared_informer.go:320] Caches are synced for resource quota
	I0701 12:25:24.710493       1 shared_informer.go:320] Caches are synced for stateful set
	I0701 12:25:24.741914       1 shared_informer.go:320] Caches are synced for resource quota
	I0701 12:25:24.771129       1 shared_informer.go:320] Caches are synced for disruption
	I0701 12:25:24.825005       1 shared_informer.go:320] Caches are synced for persistent volume
	I0701 12:25:25.061636       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.968119ms"
	I0701 12:25:25.061928       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.671µs"
	I0701 12:25:25.231337       1 shared_informer.go:320] Caches are synced for garbage collector
	I0701 12:25:25.278015       1 shared_informer.go:320] Caches are synced for garbage collector
	I0701 12:25:25.278079       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0701 12:25:53.073870       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-735960-m04"
	I0701 12:25:53.162214       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.543735ms"
	I0701 12:25:53.163381       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="162.337µs"
	I0701 12:25:59.557437       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.6658ms"
	I0701 12:25:59.558362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="80.196µs"
	I0701 12:25:59.565576       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-s49dr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-s49dr\": the object has been modified; please apply your changes to the latest version and try again"
	I0701 12:25:59.566070       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"673ce502-ab01-47a0-ad3e-c33bd402b496", APIVersion:"v1", ResourceVersion:"234", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-s49dr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-s49dr": the object has been modified; please apply your changes to the latest version and try again
	I0701 12:26:43.750974       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="174.579µs"
	I0701 12:26:47.044231       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.968469ms"
	I0701 12:26:47.047107       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.336µs"
	I0701 12:27:30.083176       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-735960-m04"
	
	
	==> kube-controller-manager [ec2c061093f1] <==
	I0701 12:24:33.938262       1 serving.go:380] Generated self-signed cert in-memory
	I0701 12:24:34.667463       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0701 12:24:34.667501       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:24:34.670076       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0701 12:24:34.670322       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0701 12:24:34.670888       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0701 12:24:34.671075       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0701 12:24:55.336106       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.16:8443/healthz\": dial tcp 192.168.39.16:8443: connect: connection refused"
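
The first controller-manager instance fails the same way: it refuses to build its controller context until GET https://192.168.39.16:8443/healthz succeeds, and gives up when the wait times out. A hedged sketch of such a readiness poll (endpoint taken from the log; a sketch, not kube-controller-manager's actual startup code):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthy polls url until it returns 200 or the deadline passes,
    // mimicking the "failed to wait for apiserver being healthy" check above.
    func waitHealthy(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The probe hits the apiserver's self-signed serving cert.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for end := time.Now().Add(deadline); time.Now().Before(end); time.Sleep(time.Second) {
    		if resp, err := client.Get(url); err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    	}
    	return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
    	if err := waitHealthy("https://192.168.39.16:8443/healthz", 30*time.Second); err != nil {
    		fmt.Println("Error building controller context:", err)
    	}
    }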
	
	
	==> kube-proxy [6116abe6039d] <==
	I0701 12:16:09.205590       1 server_linux.go:69] "Using iptables proxy"
	I0701 12:16:09.223098       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.16"]
	I0701 12:16:09.284088       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0701 12:16:09.284134       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0701 12:16:09.284152       1 server_linux.go:165] "Using iptables Proxier"
	I0701 12:16:09.286802       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0701 12:16:09.287240       1 server.go:872] "Version info" version="v1.30.2"
	I0701 12:16:09.287274       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:16:09.288803       1 config.go:192] "Starting service config controller"
	I0701 12:16:09.288830       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0701 12:16:09.289262       1 config.go:101] "Starting endpoint slice config controller"
	I0701 12:16:09.289283       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0701 12:16:09.290101       1 config.go:319] "Starting node config controller"
	I0701 12:16:09.290125       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0701 12:16:09.389941       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0701 12:16:09.390030       1 shared_informer.go:320] Caches are synced for service config
	I0701 12:16:09.390393       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [710f5c3a9f85] <==
	I0701 12:25:23.858069       1 server_linux.go:69] "Using iptables proxy"
	I0701 12:25:23.875125       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.16"]
	I0701 12:25:23.958416       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0701 12:25:23.958505       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0701 12:25:23.958526       1 server_linux.go:165] "Using iptables Proxier"
	I0701 12:25:23.963079       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0701 12:25:23.963683       1 server.go:872] "Version info" version="v1.30.2"
	I0701 12:25:23.963707       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:25:23.967807       1 config.go:192] "Starting service config controller"
	I0701 12:25:23.968544       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0701 12:25:23.968625       1 config.go:101] "Starting endpoint slice config controller"
	I0701 12:25:23.968632       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0701 12:25:23.972994       1 config.go:319] "Starting node config controller"
	I0701 12:25:23.973007       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0701 12:25:24.069380       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0701 12:25:24.069565       1 shared_informer.go:320] Caches are synced for service config
	I0701 12:25:24.073577       1 shared_informer.go:320] Caches are synced for node config
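
On each start kube-proxy's iptables proxier sets the route_localnet sysctl so NodePorts answer on localhost, exactly as the "Setting route_localnet=1" line says. Verifying the effect from Go is a one-liner over procfs (run inside the VM; path per the kernel's ip-sysctl documentation):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// kube-proxy writes 1 here (see "Setting route_localnet=1" in the log).
    	b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/route_localnet")
    	if err != nil {
    		fmt.Println("cannot read sysctl:", err)
    		return
    	}
    	fmt.Println("route_localnet =", strings.TrimSpace(string(b)))
    }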
	
	
	==> kube-scheduler [2d71437c5f06] <==
	Trace[1766396451]: [10.001227292s] [10.001227292s] END
	E0701 12:23:38.923742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	W0701 12:23:40.712171       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.16:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:23:40.712228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.16:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:23:40.847258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.16:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35008->192.168.39.16:8443: read: connection reset by peer
	I0701 12:23:40.847402       1 trace.go:236] Trace[2065780204]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Jul-2024 12:23:30.463) (total time: 10384ms):
	Trace[2065780204]: ---"Objects listed" error:Get "https://192.168.39.16:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35008->192.168.39.16:8443: read: connection reset by peer 10384ms (12:23:40.847)
	Trace[2065780204]: [10.384136255s] [10.384136255s] END
	E0701 12:23:40.847432       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.16:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35008->192.168.39.16:8443: read: connection reset by peer
	W0701 12:23:40.847437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35050->192.168.39.16:8443: read: connection reset by peer
	W0701 12:23:40.847259       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.16:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35028->192.168.39.16:8443: read: connection reset by peer
	E0701 12:23:40.847495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35050->192.168.39.16:8443: read: connection reset by peer
	E0701 12:23:40.847499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.16:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35028->192.168.39.16:8443: read: connection reset by peer
	W0701 12:23:40.847682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.16:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35066->192.168.39.16:8443: read: connection reset by peer
	E0701 12:23:40.847714       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.16:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35066->192.168.39.16:8443: read: connection reset by peer
	W0701 12:23:40.848299       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.16:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35034->192.168.39.16:8443: read: connection reset by peer
	E0701 12:23:40.848357       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.16:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35034->192.168.39.16:8443: read: connection reset by peer
	W0701 12:23:51.660283       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:23:51.660337       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:23:54.252191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:23:54.252565       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:23:55.679907       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:23:55.680228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:24:00.290141       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0701 12:24:00.290379       1 run.go:74] "command failed" err="finished without leader elect"
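
The crashed scheduler's log is a textbook reflector retry loop: each List against https://192.168.39.16:8443 fails with connection refused, client-go backs off, and the process finally exits with "finished without leader elect" once its context is cancelled. A toy version of that retry-with-backoff shape (a sketch, not client-go's implementation):

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    // listWithBackoff retries a dial to the apiserver with doubling delays until
    // ctx is cancelled, loosely mirroring the reflector behaviour in the log.
    func listWithBackoff(ctx context.Context, addr string) error {
    	delay := time.Second
    	for {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		fmt.Println("failed to list:", err) // e.g. connect: connection refused
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("finished without leader elect: %w", ctx.Err())
    		case <-time.After(delay):
    			delay *= 2
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()
    	if err := listWithBackoff(ctx, "192.168.39.16:8443"); err != nil {
    		fmt.Println("command failed:", err)
    	}
    }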
	
	
	==> kube-scheduler [693eb0b8f5d7] <==
	W0701 12:25:03.325651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.16:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:25:03.325717       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.16:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:25:03.469418       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.16:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:25:03.469554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.16:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:25:03.474242       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.16:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:25:03.474348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.16:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:25:03.575486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:25:03.575608       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:25:03.691679       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.16:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:25:03.691809       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.16:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:25:05.461372       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.16:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:25:05.461485       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.16:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:25:05.563752       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.16:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:25:05.563793       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.16:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:25:05.636901       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:25:05.637119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:25:11.653758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0701 12:25:11.654470       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0701 12:25:11.654763       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0701 12:25:11.655634       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0701 12:25:11.655894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0701 12:25:11.655933       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0701 12:25:11.659133       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0701 12:25:11.659348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0701 12:25:13.850760       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 01 12:25:13 ha-735960 kubelet[1550]: I0701 12:25:13.105581    1550 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 01 12:25:13 ha-735960 kubelet[1550]: I0701 12:25:13.106791    1550 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 01 12:25:23 ha-735960 kubelet[1550]: I0701 12:25:23.225133    1550 scope.go:117] "RemoveContainer" containerID="769b0b8751350714b3d616a4cb2d06e20a1b7a96e8ac3e8f21b0d653f581e5f0"
	Jul 01 12:25:23 ha-735960 kubelet[1550]: I0701 12:25:23.225251    1550 scope.go:117] "RemoveContainer" containerID="a9c30cd4b3455401ac572f5a7fb2b84cb27956207b4804f80b909a2ccb4c394f"
	Jul 01 12:25:23 ha-735960 kubelet[1550]: I0701 12:25:23.226499    1550 scope.go:117] "RemoveContainer" containerID="6116abe6039dc6c324dce464fa4d85597bcc3455523d4a06be4293c343a9f8f9"
	Jul 01 12:25:24 ha-735960 kubelet[1550]: I0701 12:25:24.225255    1550 scope.go:117] "RemoveContainer" containerID="1ef6d9da6a9c5d6e77ef8d42735bdba288502d231394d299243bc1b669822d1c"
	Jul 01 12:25:25 ha-735960 kubelet[1550]: I0701 12:25:25.225212    1550 scope.go:117] "RemoveContainer" containerID="f472aef5302fd01233da1bd769162654c0b238cb1a3b0c9b24deef221c4821a3"
	Jul 01 12:25:26 ha-735960 kubelet[1550]: I0701 12:25:26.229286    1550 scope.go:117] "RemoveContainer" containerID="97d58c94f3fdcc84b84c3c46e6b25f8e7da118d5c9cd53058ae127fe580a40a7"
	Jul 01 12:25:26 ha-735960 kubelet[1550]: E0701 12:25:26.319340    1550 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 01 12:25:26 ha-735960 kubelet[1550]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 01 12:25:26 ha-735960 kubelet[1550]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 01 12:25:26 ha-735960 kubelet[1550]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 01 12:25:26 ha-735960 kubelet[1550]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 01 12:25:26 ha-735960 kubelet[1550]: I0701 12:25:26.443283    1550 scope.go:117] "RemoveContainer" containerID="14112a4d8f2cb5cfea8813c52de120eeef6fe681ebf589fd8708d1557c35b85f"
	Jul 01 12:25:26 ha-735960 kubelet[1550]: I0701 12:25:26.480472    1550 scope.go:117] "RemoveContainer" containerID="97d58c94f3fdcc84b84c3c46e6b25f8e7da118d5c9cd53058ae127fe580a40a7"
	Jul 01 12:26:26 ha-735960 kubelet[1550]: E0701 12:26:26.244909    1550 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 01 12:26:26 ha-735960 kubelet[1550]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 01 12:26:26 ha-735960 kubelet[1550]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 01 12:26:26 ha-735960 kubelet[1550]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 01 12:26:26 ha-735960 kubelet[1550]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 01 12:27:26 ha-735960 kubelet[1550]: E0701 12:27:26.245316    1550 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 01 12:27:26 ha-735960 kubelet[1550]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 01 12:27:26 ha-735960 kubelet[1550]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 01 12:27:26 ha-735960 kubelet[1550]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 01 12:27:26 ha-735960 kubelet[1550]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
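
The scheduler log above reads as one coherent sequence: every list/watch against https://192.168.39.16:8443 fails with "connection refused" while the VM is coming back up, a handful of requests are then rejected with RBAC "forbidden" errors while the restarted apiserver warms up, and only the final "Caches are synced" line at 12:25:13 marks recovery. When triaging a window like this by hand, polling the apiserver health endpoint is usually faster than reading component logs. Below is a minimal standalone probe sketch, not part of the test suite: the address is taken from the logs above, /readyz is the standard apiserver readiness endpoint, and TLS verification is skipped only because this is a throwaway check against a test VM.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The test VM uses minikube's self-signed CA; skip verification for a quick probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for attempt := 1; attempt <= 30; attempt++ {
			resp, err := client.Get("https://192.168.39.16:8443/readyz")
			if err == nil {
				// Any HTTP status, even 401/403, proves the apiserver is accepting connections.
				fmt.Printf("attempt %d: apiserver answered: %s\n", attempt, resp.Status)
				resp.Body.Close()
				return
			}
			fmt.Printf("attempt %d: %v\n", attempt, err) // e.g. "connect: connection refused"
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up: apiserver never became reachable")
	}

During the window captured above this would print "connection refused" until roughly 12:25 and then start answering, matching the point where the scheduler's caches synced.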
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-735960 -n ha-735960
helpers_test.go:261: (dbg) Run:  kubectl --context ha-735960 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (217.51s)
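
One recurring item in the kubelet section deserves a note even though it is not what failed the test: once a minute the kubelet cannot set up its iptables canary because ip6tables cannot initialize the `nat' table, and the message's own hint ("do you need to insmod?") points at the ip6table_nat kernel module not being loaded in the guest. A small diagnostic sketch along those lines, to be run inside the VM; it assumes the module would show up in /proc/modules (a module built into the kernel never appears there, so a missing entry is suggestive rather than conclusive):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// /proc/modules lists one loaded module per line, module name first.
		f, err := os.Open("/proc/modules")
		if err != nil {
			fmt.Fprintln(os.Stderr, "cannot read module list:", err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if strings.HasPrefix(sc.Text(), "ip6table_nat ") {
				fmt.Println("ip6table_nat is loaded")
				return
			}
		}
		fmt.Println("ip6table_nat is not loaded; ip6tables -t nat will keep failing")
	}

If the module is reported missing, `sudo modprobe ip6table_nat` inside the guest is the usual next step; the canary error is noise in this run rather than the cause of the failure.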

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:413: expected profile "ha-735960" in json of 'profile list' to have "Degraded" status but have "HAppy" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-735960\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-735960\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-735960\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.16\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.86\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.97\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.60\",\"Port\":0,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-735960 -n ha-735960
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-735960 logs -n 25: (1.588257531s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-735960 cp ha-735960-m03:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04:/home/docker/cp-test_ha-735960-m03_ha-735960-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m04 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m03_ha-735960-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-735960 cp testdata/cp-test.txt                                                | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2826819896/001/cp-test_ha-735960-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960:/home/docker/cp-test_ha-735960-m04_ha-735960.txt                       |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960 sudo cat                                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m04_ha-735960.txt                                 |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m02:/home/docker/cp-test_ha-735960-m04_ha-735960-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m02 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m04_ha-735960-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m03:/home/docker/cp-test_ha-735960-m04_ha-735960-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m03 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m04_ha-735960-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-735960 node stop m02 -v=7                                                     | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-735960 node start m02 -v=7                                                    | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:20 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-735960 -v=7                                                           | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-735960 -v=7                                                                | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:20 UTC | 01 Jul 24 12:21 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-735960 --wait=true -v=7                                                    | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-735960                                                                | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:22 UTC |                     |
	| node    | ha-735960 node delete m03 -v=7                                                   | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-735960 stop -v=7                                                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:23 UTC | 01 Jul 24 12:24 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-735960 --wait=true                                                         | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:24 UTC | 01 Jul 24 12:27 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/01 12:24:02
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 12:24:02.565321  653531 out.go:291] Setting OutFile to fd 1 ...
	I0701 12:24:02.565576  653531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:24:02.565584  653531 out.go:304] Setting ErrFile to fd 2...
	I0701 12:24:02.565588  653531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:24:02.565782  653531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
	I0701 12:24:02.566304  653531 out.go:298] Setting JSON to false
	I0701 12:24:02.567248  653531 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7581,"bootTime":1719829062,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 12:24:02.567318  653531 start.go:139] virtualization: kvm guest
	I0701 12:24:02.569903  653531 out.go:177] * [ha-735960] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0701 12:24:02.571307  653531 notify.go:220] Checking for updates...
	I0701 12:24:02.571336  653531 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 12:24:02.572748  653531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:24:02.574111  653531 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:24:02.575333  653531 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-630650/.minikube
	I0701 12:24:02.576670  653531 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 12:24:02.578040  653531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:24:02.579691  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:02.580063  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:02.580118  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:02.595084  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46077
	I0701 12:24:02.595523  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:02.596065  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:02.596090  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:02.596376  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:02.596591  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:02.596798  653531 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 12:24:02.597091  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:02.597140  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:02.611685  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43891
	I0701 12:24:02.612062  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:02.612574  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:02.612596  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:02.612886  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:02.613060  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:02.647232  653531 out.go:177] * Using the kvm2 driver based on existing profile
	I0701 12:24:02.648606  653531 start.go:297] selected driver: kvm2
	I0701 12:24:02.648624  653531 start.go:901] validating driver "kvm2" against &{Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:24:02.648774  653531 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:24:02.649109  653531 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:24:02.649176  653531 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19166-630650/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0701 12:24:02.663726  653531 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0701 12:24:02.664362  653531 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:24:02.664394  653531 cni.go:84] Creating CNI manager for ""
	I0701 12:24:02.664400  653531 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0701 12:24:02.664456  653531 start.go:340] cluster config:
	{Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:24:02.664569  653531 iso.go:125] acquiring lock: {Name:mk5c70910f61bc270c83609c48670eaf9d7e0602 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:24:02.666644  653531 out.go:177] * Starting "ha-735960" primary control-plane node in "ha-735960" cluster
	I0701 12:24:02.667913  653531 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:24:02.667956  653531 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0701 12:24:02.667963  653531 cache.go:56] Caching tarball of preloaded images
	I0701 12:24:02.668051  653531 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:24:02.668065  653531 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:24:02.668178  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:24:02.668362  653531 start.go:360] acquireMachinesLock for ha-735960: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:24:02.668420  653531 start.go:364] duration metric: took 37.459µs to acquireMachinesLock for "ha-735960"
	I0701 12:24:02.668440  653531 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:24:02.668454  653531 fix.go:54] fixHost starting: 
	I0701 12:24:02.668711  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:02.668747  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:02.682861  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39713
	I0701 12:24:02.683321  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:02.683791  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:02.683812  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:02.684145  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:02.684389  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:02.684573  653531 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:24:02.686019  653531 fix.go:112] recreateIfNeeded on ha-735960: state=Stopped err=<nil>
	I0701 12:24:02.686043  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	W0701 12:24:02.686187  653531 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:24:02.688339  653531 out.go:177] * Restarting existing kvm2 VM for "ha-735960" ...
	I0701 12:24:02.690004  653531 main.go:141] libmachine: (ha-735960) Calling .Start
	I0701 12:24:02.690210  653531 main.go:141] libmachine: (ha-735960) Ensuring networks are active...
	I0701 12:24:02.690928  653531 main.go:141] libmachine: (ha-735960) Ensuring network default is active
	I0701 12:24:02.691237  653531 main.go:141] libmachine: (ha-735960) Ensuring network mk-ha-735960 is active
	I0701 12:24:02.691618  653531 main.go:141] libmachine: (ha-735960) Getting domain xml...
	I0701 12:24:02.692321  653531 main.go:141] libmachine: (ha-735960) Creating domain...
	I0701 12:24:03.888996  653531 main.go:141] libmachine: (ha-735960) Waiting to get IP...
	I0701 12:24:03.889967  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:03.890480  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:03.890588  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:03.890454  653582 retry.go:31] will retry after 276.532377ms: waiting for machine to come up
	I0701 12:24:04.169193  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:04.169696  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:04.169722  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:04.169655  653582 retry.go:31] will retry after 379.701447ms: waiting for machine to come up
	I0701 12:24:04.551325  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:04.551741  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:04.551768  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:04.551690  653582 retry.go:31] will retry after 390.796114ms: waiting for machine to come up
	I0701 12:24:04.944503  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:04.944879  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:04.944907  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:04.944824  653582 retry.go:31] will retry after 501.242083ms: waiting for machine to come up
	I0701 12:24:05.447754  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:05.448283  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:05.448315  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:05.448261  653582 retry.go:31] will retry after 739.761709ms: waiting for machine to come up
	I0701 12:24:06.189145  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:06.189602  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:06.189631  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:06.189545  653582 retry.go:31] will retry after 652.97975ms: waiting for machine to come up
	I0701 12:24:06.844427  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:06.844894  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:06.844917  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:06.844845  653582 retry.go:31] will retry after 1.122975762s: waiting for machine to come up
	I0701 12:24:07.969893  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:07.970374  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:07.970427  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:07.970304  653582 retry.go:31] will retry after 933.604302ms: waiting for machine to come up
	I0701 12:24:08.905636  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:08.905959  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:08.905983  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:08.905909  653582 retry.go:31] will retry after 1.753153445s: waiting for machine to come up
	I0701 12:24:10.662098  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:10.662553  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:10.662622  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:10.662537  653582 retry.go:31] will retry after 1.625060377s: waiting for machine to come up
	I0701 12:24:12.290368  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:12.290788  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:12.290822  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:12.290695  653582 retry.go:31] will retry after 2.741972388s: waiting for machine to come up
	I0701 12:24:15.036161  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:15.036634  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:15.036661  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:15.036581  653582 retry.go:31] will retry after 3.113034425s: waiting for machine to come up
	I0701 12:24:18.151534  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.152048  653531 main.go:141] libmachine: (ha-735960) Found IP for machine: 192.168.39.16
	I0701 12:24:18.152074  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has current primary IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.152083  653531 main.go:141] libmachine: (ha-735960) Reserving static IP address...
	I0701 12:24:18.152579  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "ha-735960", mac: "52:54:00:6c:20:7c", ip: "192.168.39.16"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.152611  653531 main.go:141] libmachine: (ha-735960) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960", mac: "52:54:00:6c:20:7c", ip: "192.168.39.16"}
	I0701 12:24:18.152626  653531 main.go:141] libmachine: (ha-735960) Reserved static IP address: 192.168.39.16
	I0701 12:24:18.152643  653531 main.go:141] libmachine: (ha-735960) Waiting for SSH to be available...
	I0701 12:24:18.152674  653531 main.go:141] libmachine: (ha-735960) DBG | Getting to WaitForSSH function...
	I0701 12:24:18.154511  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.154741  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.154760  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.154885  653531 main.go:141] libmachine: (ha-735960) DBG | Using SSH client type: external
	I0701 12:24:18.154912  653531 main.go:141] libmachine: (ha-735960) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa (-rw-------)
	I0701 12:24:18.154954  653531 main.go:141] libmachine: (ha-735960) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:24:18.154968  653531 main.go:141] libmachine: (ha-735960) DBG | About to run SSH command:
	I0701 12:24:18.154991  653531 main.go:141] libmachine: (ha-735960) DBG | exit 0
	I0701 12:24:18.274220  653531 main.go:141] libmachine: (ha-735960) DBG | SSH cmd err, output: <nil>: 
	I0701 12:24:18.274677  653531 main.go:141] libmachine: (ha-735960) Calling .GetConfigRaw
	I0701 12:24:18.275344  653531 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:24:18.277628  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.278085  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.278118  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.278447  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:24:18.278671  653531 machine.go:94] provisionDockerMachine start ...
	I0701 12:24:18.278694  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:18.278956  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:18.281138  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.281565  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.281590  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.281697  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:18.281884  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.282084  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.282290  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:18.282484  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:18.282777  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:18.282790  653531 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:24:18.378249  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:24:18.378279  653531 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:24:18.378583  653531 buildroot.go:166] provisioning hostname "ha-735960"
	I0701 12:24:18.378614  653531 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:24:18.378869  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:18.381421  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.381789  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.381817  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.381949  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:18.382158  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.382297  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.382445  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:18.382576  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:18.382763  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:18.382780  653531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960 && echo "ha-735960" | sudo tee /etc/hostname
	I0701 12:24:18.491369  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960
	
	I0701 12:24:18.491396  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:18.494039  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.494432  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.494460  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.494718  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:18.494939  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.495106  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.495259  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:18.495452  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:18.495675  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:18.495699  653531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:24:18.598595  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
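The hostname command above is an idempotent /etc/hosts fixup: only when no line already ends in the hostname does it either rewrite an existing 127.0.1.1 entry or append a new one. A minimal Go sketch of the same logic, with the file path parameterized so it can be tried against a scratch copy (hypothetical helper, not minikube's actual code):

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the shell snippet in the log: if no line in the
	// hosts file maps to hostname yet, rewrite an existing 127.0.1.1 line or
	// append a fresh one; otherwise leave the file untouched.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		// Equivalent of: grep -xq '.*\s<hostname>' /etc/hosts
		if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).Match(data) {
			return nil // already mapped
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		var out string
		if loopback.Match(data) {
			// sed -i 's/^127.0.1.1\s.*/127.0.1.1 <hostname>/'
			out = loopback.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
		} else {
			// echo '127.0.1.1 <hostname>' | tee -a /etc/hosts
			out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + hostname + "\n"
		}
		return os.WriteFile(path, []byte(out), 0644)
	}

	func main() {
		if err := ensureHostsEntry("hosts.copy", "ha-735960"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}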
	I0701 12:24:18.598631  653531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:24:18.598653  653531 buildroot.go:174] setting up certificates
	I0701 12:24:18.598662  653531 provision.go:84] configureAuth start
	I0701 12:24:18.598670  653531 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:24:18.598968  653531 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:24:18.601563  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.602005  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.602036  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.602215  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:18.604739  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.605246  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.605273  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.605427  653531 provision.go:143] copyHostCerts
	I0701 12:24:18.605458  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:24:18.605515  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:24:18.605523  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:24:18.605588  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:24:18.605671  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:24:18.605688  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:24:18.605695  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:24:18.605718  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:24:18.605772  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:24:18.605788  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:24:18.605794  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:24:18.605814  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:24:18.605871  653531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960 san=[127.0.0.1 192.168.39.16 ha-735960 localhost minikube]
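The provision step here issues a Docker TLS server certificate whose SANs cover the loopback address, the VM's DHCP address, and the machine names, signed by the workspace CA. A self-contained crypto/x509 sketch of that style of issuance; the in-memory throwaway CA stands in for ca.pem/ca-key.pem, and the SAN values are copied from the san=[...] list above:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway in-memory CA, standing in for the workspace ca.pem/ca-key.pem.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{Organization: []string{"jenkins.ha-735960"}},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(1, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			panic(err)
		}

		// Server certificate with the SAN set from the log:
		// san=[127.0.0.1 192.168.39.16 ha-735960 localhost minikube]
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "ha-735960"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.16")},
			DNSNames:     []string{"ha-735960", "localhost", "minikube"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// The CA key signs the server certificate (pub is the server's key).
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("issued %d-byte server cert for SANs %v %v\n", len(srvDER), srvTmpl.DNSNames, srvTmpl.IPAddresses)
	}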
	I0701 12:24:19.079576  653531 provision.go:177] copyRemoteCerts
	I0701 12:24:19.079661  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:24:19.079696  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:19.082253  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.082610  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:19.082638  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.082786  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:19.082996  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.083179  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:19.083325  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:19.160543  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:24:19.160634  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:24:19.183871  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:24:19.183957  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0701 12:24:19.206811  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:24:19.206911  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 12:24:19.229160  653531 provision.go:87] duration metric: took 630.48062ms to configureAuth
	I0701 12:24:19.229197  653531 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:24:19.229480  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:19.229521  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:19.229827  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:19.232595  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.233032  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:19.233062  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.233264  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:19.233514  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.233696  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.233834  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:19.234025  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:19.234222  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:19.234237  653531 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:24:19.331417  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:24:19.331446  653531 buildroot.go:70] root file system type: tmpfs
	I0701 12:24:19.331582  653531 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:24:19.331605  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:19.334269  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.334634  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:19.334660  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.334900  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:19.335107  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.335308  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.335479  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:19.335645  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:19.335809  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:19.335865  653531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:24:19.443562  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:24:19.443592  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:19.446176  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.446524  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:19.446556  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.446723  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:19.446930  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.447105  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.447245  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:19.447408  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:19.447591  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:19.447611  653531 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:24:21.232310  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 12:24:21.232343  653531 machine.go:97] duration metric: took 2.953656212s to provisionDockerMachine
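The docker.service provisioning above uses a compare-and-swap pattern: render the unit to docker.service.new, diff it against the live unit, and only move it into place and restart Docker when the content differs (here the live file did not exist yet, so the diff fails and the swap runs). That way the container runtime is not bounced on every start when the rendered unit is already current. A rough Go equivalent of the guard, with illustrative paths rather than minikube's API:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// swapIfChanged mirrors the `diff -u live new || { mv; daemon-reload;
	// enable; restart }` pattern from the log: only move the freshly rendered
	// unit into place and bounce the service when the bytes differ (a missing
	// live file counts as changed).
	func swapIfChanged(live, rendered, unit string) error {
		newData, err := os.ReadFile(rendered)
		if err != nil {
			return err
		}
		if oldData, err := os.ReadFile(live); err == nil && bytes.Equal(oldData, newData) {
			return os.Remove(rendered) // already current, nothing to restart
		}
		if err := os.Rename(rendered, live); err != nil {
			return err
		}
		for _, args := range [][]string{{"daemon-reload"}, {"enable", unit}, {"restart", unit}} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := swapIfChanged("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new", "docker"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}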
	I0701 12:24:21.232359  653531 start.go:293] postStartSetup for "ha-735960" (driver="kvm2")
	I0701 12:24:21.232371  653531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:24:21.232390  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.232744  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:24:21.232777  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:21.235119  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.235559  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.235584  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.235772  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:21.235940  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.236122  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:21.236248  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:21.313134  653531 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:24:21.317084  653531 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:24:21.317118  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:24:21.317202  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:24:21.317295  653531 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:24:21.317307  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:24:21.317399  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:24:21.326681  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:24:21.349306  653531 start.go:296] duration metric: took 116.926386ms for postStartSetup
	I0701 12:24:21.349360  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.349703  653531 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:24:21.349739  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:21.352499  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.352917  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.352946  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.353148  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:21.353394  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.353561  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:21.353790  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:21.433784  653531 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:24:21.433859  653531 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:24:21.475659  653531 fix.go:56] duration metric: took 18.807194904s for fixHost
	I0701 12:24:21.475706  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:21.478623  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.479038  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.479071  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.479250  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:21.479467  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.479584  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.479702  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:21.479838  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:21.480034  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:21.480048  653531 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0701 12:24:21.586741  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836661.563256683
	
	I0701 12:24:21.586770  653531 fix.go:216] guest clock: 1719836661.563256683
	I0701 12:24:21.586783  653531 fix.go:229] Guest: 2024-07-01 12:24:21.563256683 +0000 UTC Remote: 2024-07-01 12:24:21.475685785 +0000 UTC m=+18.945537438 (delta=87.570898ms)
	I0701 12:24:21.586836  653531 fix.go:200] guest clock delta is within tolerance: 87.570898ms
	I0701 12:24:21.586844  653531 start.go:83] releasing machines lock for "ha-735960", held for 18.918411663s
	I0701 12:24:21.586868  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.587158  653531 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:24:21.589666  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.590034  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.590064  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.590216  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.590761  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.590954  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.591048  653531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:24:21.591096  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:21.591207  653531 ssh_runner.go:195] Run: cat /version.json
	I0701 12:24:21.591235  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:21.593711  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.593857  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.594066  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.594091  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.594278  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:21.594408  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.594432  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.594491  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.594596  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:21.594674  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:21.594780  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.594865  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:21.594903  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:21.595018  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:21.688196  653531 ssh_runner.go:195] Run: systemctl --version
	I0701 12:24:21.693743  653531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0701 12:24:21.698823  653531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:24:21.698901  653531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:24:21.714364  653531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
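The find/mv pair above neutralizes competing CNI configurations by renaming any bridge or podman config in /etc/cni/net.d to <name>.mk_disabled rather than deleting it, so it can be restored later. A Go sketch of the same sweep (hypothetical helper; the directory is passed in so it can be pointed at a test directory):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableConflictingCNI mirrors the find/mv in the log: rename bridge and
	// podman CNI configs in dir to <name>.mk_disabled so only the CNI that
	// minikube manages stays active.
	func disableConflictingCNI(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableConflictingCNI("/etc/cni/net.d")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("disabled:", disabled)
	}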
	I0701 12:24:21.714404  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:24:21.714572  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:24:21.734692  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:24:21.744599  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:24:21.754591  653531 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:24:21.754664  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:24:21.764718  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:24:21.774564  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:24:21.784516  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:24:21.794592  653531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:24:21.804646  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:24:21.814497  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:24:21.824363  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:24:21.834566  653531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:24:21.843852  653531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:24:21.852939  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:21.959107  653531 ssh_runner.go:195] Run: sudo systemctl restart containerd
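The sed commands above rewrite /etc/containerd/config.toml in place so containerd matches the "cgroupfs" driver decision: SystemdCgroup is forced to false, the pause image is pinned, and restrict_oom_score_adj is disabled. A Go sketch of the same kind of line-oriented rewrite (illustrative only; the real tool runs sed over SSH):

	package main

	import (
		"os"
		"regexp"
	)

	// rewriteContainerdConfig applies the same rewrites the log performs with
	// sed, preserving each line's indentation via the captured prefix.
	func rewriteContainerdConfig(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		rules := []struct{ re, repl string }{
			{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
			{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
			{`(?m)^(\s*)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
		}
		out := string(data)
		for _, r := range rules {
			out = regexp.MustCompile(r.re).ReplaceAllString(out, r.repl)
		}
		return os.WriteFile(path, []byte(out), 0644)
	}

	func main() {
		if err := rewriteContainerdConfig("/etc/containerd/config.toml"); err != nil {
			panic(err)
		}
	}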
	I0701 12:24:21.981473  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:24:21.981556  653531 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:24:21.995383  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:24:22.009843  653531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:24:22.030755  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:24:22.043208  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:24:22.055774  653531 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:24:22.080888  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:24:22.093331  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:24:22.110088  653531 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:24:22.113487  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:24:22.121907  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:24:22.137227  653531 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:24:22.245438  653531 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:24:22.351994  653531 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:24:22.352150  653531 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 12:24:22.368109  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:22.474388  653531 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:24:24.887396  653531 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.412956412s)
	I0701 12:24:24.887487  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 12:24:24.900113  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:24:24.912702  653531 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:24:25.020545  653531 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:24:25.134056  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:25.242294  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:24:25.258251  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:24:25.270762  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:25.375199  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0701 12:24:25.454939  653531 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:24:25.455020  653531 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0701 12:24:25.460209  653531 start.go:562] Will wait 60s for crictl version
	I0701 12:24:25.460266  653531 ssh_runner.go:195] Run: which crictl
	I0701 12:24:25.463721  653531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:24:25.498358  653531 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.1
	RuntimeApiVersion:  v1
	I0701 12:24:25.498453  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:24:25.525766  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:24:25.549708  653531 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
	I0701 12:24:25.549757  653531 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:24:25.552699  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:25.553097  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:25.553132  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:25.553374  653531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0701 12:24:25.557331  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
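This /etc/hosts update uses a different idiom than the hostname fixup earlier: filter out any stale line for the name with grep -v, append the fresh mapping, write the result to a temp file, and copy it over the live file in one step. A compact Go version of that replace-by-filter pattern (hypothetical helper):

	package main

	import (
		"os"
		"strings"
	)

	// pinHostsEntry mirrors the `{ grep -v ...; echo ...; } > tmp; cp` pattern:
	// drop any existing line for name, append the fresh IP mapping, and swap
	// the file in via a temp path so the live file changes in one step.
	func pinHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) { // grep -v $'\t<name>$'
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, path) // stands in for `sudo cp /tmp/h.$$ /etc/hosts`
	}

	func main() {
		if err := pinHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
			panic(err)
		}
	}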
	I0701 12:24:25.569653  653531 kubeadm.go:877] updating cluster {Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0701 12:24:25.569810  653531 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:24:25.569866  653531 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:24:25.593428  653531 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0701 12:24:25.593450  653531 docker.go:615] Images already preloaded, skipping extraction
	I0701 12:24:25.593535  653531 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:24:25.613507  653531 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0701 12:24:25.613542  653531 cache_images.go:84] Images are preloaded, skipping loading
	I0701 12:24:25.613557  653531 kubeadm.go:928] updating node { 192.168.39.16 8443 v1.30.2 docker true true} ...
	I0701 12:24:25.613677  653531 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0701 12:24:25.613736  653531 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0701 12:24:25.636959  653531 cni.go:84] Creating CNI manager for ""
	I0701 12:24:25.636987  653531 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0701 12:24:25.637001  653531 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0701 12:24:25.637033  653531 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.16 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-735960 NodeName:ha-735960 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0701 12:24:25.637207  653531 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-735960"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
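The kubeadm options struct logged at kubeadm.go:181 is rendered into the multi-document YAML above. A trimmed-down sketch of how such rendering can work with text/template; the struct and template below model only a few of the fields from the log and are not minikube's actual types:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeadmParams is a stand-in for the parameter struct in the log; only a
	// handful of the fields are modeled here.
	type kubeadmParams struct {
		AdvertiseAddress  string
		APIServerPort     int
		NodeName          string
		PodSubnet         string
		ServiceCIDR       string
		KubernetesVersion string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	func main() {
		p := kubeadmParams{
			AdvertiseAddress:  "192.168.39.16",
			APIServerPort:     8443,
			NodeName:          "ha-735960",
			PodSubnet:         "10.244.0.0/16",
			ServiceCIDR:       "10.96.0.0/12",
			KubernetesVersion: "v1.30.2",
		}
		if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}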
	
	I0701 12:24:25.637234  653531 kube-vip.go:115] generating kube-vip config ...
	I0701 12:24:25.637291  653531 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0701 12:24:25.651059  653531 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0701 12:24:25.651192  653531 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
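Note the auto-enabling step at kube-vip.go:167: control-plane load-balancing needs IPVS, so minikube first loads the ip_vs modules (the modprobe command above) and only then sets lb_enable/lb_port in the manifest. A short Go sketch of that gate (the env map is illustrative, not the real manifest generator):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ipvsAvailable mirrors the probe in the log: try to load the IPVS modules
	// and report whether kube-vip's load-balancing can be enabled.
	func ipvsAvailable() bool {
		err := exec.Command("sudo", "sh", "-c",
			"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()
		return err == nil
	}

	func main() {
		env := map[string]string{"cp_enable": "true"}
		if ipvsAvailable() {
			// Matches "auto-enabling control-plane load-balancing in kube-vip".
			env["lb_enable"] = "true"
			env["lb_port"] = "8443"
		}
		fmt.Println(env)
	}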
	I0701 12:24:25.651261  653531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 12:24:25.660952  653531 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:24:25.661049  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0701 12:24:25.669901  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0701 12:24:25.685801  653531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:24:25.701259  653531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0701 12:24:25.717237  653531 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0701 12:24:25.732682  653531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0701 12:24:25.736549  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:24:25.748348  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:25.857797  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:24:25.874307  653531 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960 for IP: 192.168.39.16
	I0701 12:24:25.874340  653531 certs.go:194] generating shared ca certs ...
	I0701 12:24:25.874365  653531 certs.go:226] acquiring lock for ca certs: {Name:mk34e166bfd069e523b2325e14d1812c523bff53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:24:25.874584  653531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key
	I0701 12:24:25.874645  653531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key
	I0701 12:24:25.874659  653531 certs.go:256] generating profile certs ...
	I0701 12:24:25.874733  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key
	I0701 12:24:25.874814  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af
	I0701 12:24:25.874868  653531 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key
	I0701 12:24:25.874883  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 12:24:25.874918  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 12:24:25.874937  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 12:24:25.874955  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 12:24:25.874972  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0701 12:24:25.874991  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0701 12:24:25.875008  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0701 12:24:25.875025  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0701 12:24:25.875093  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem (1338 bytes)
	W0701 12:24:25.875146  653531 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854_empty.pem, impossibly tiny 0 bytes
	I0701 12:24:25.875161  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 12:24:25.875193  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:24:25.875224  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:24:25.875261  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem (1675 bytes)
	I0701 12:24:25.875343  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:24:25.875386  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /usr/share/ca-certificates/6378542.pem
	I0701 12:24:25.875409  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:25.875426  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem -> /usr/share/ca-certificates/637854.pem
	I0701 12:24:25.876083  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:24:25.910761  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:24:25.938480  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:24:25.963281  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:24:25.989413  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0701 12:24:26.015055  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 12:24:26.039406  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 12:24:26.062955  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 12:24:26.093960  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /usr/share/ca-certificates/6378542.pem (1708 bytes)
	I0701 12:24:26.125896  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:24:26.156031  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem --> /usr/share/ca-certificates/637854.pem (1338 bytes)
	I0701 12:24:26.181375  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 12:24:26.209470  653531 ssh_runner.go:195] Run: openssl version
	I0701 12:24:26.218386  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6378542.pem && ln -fs /usr/share/ca-certificates/6378542.pem /etc/ssl/certs/6378542.pem"
	I0701 12:24:26.233243  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6378542.pem
	I0701 12:24:26.241811  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 12:11 /usr/share/ca-certificates/6378542.pem
	I0701 12:24:26.241888  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem
	I0701 12:24:26.250559  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6378542.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 12:24:26.277768  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:24:26.305594  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:26.315685  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:26.315763  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:26.330923  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 12:24:26.351095  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/637854.pem && ln -fs /usr/share/ca-certificates/637854.pem /etc/ssl/certs/637854.pem"
	I0701 12:24:26.374355  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/637854.pem
	I0701 12:24:26.380759  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 12:11 /usr/share/ca-certificates/637854.pem
	I0701 12:24:26.380836  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/637854.pem
	I0701 12:24:26.392584  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/637854.pem /etc/ssl/certs/51391683.0"
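	The `3ec20f2e.0` / `b5213941.0` / `51391683.0` names in the commands above follow OpenSSL's subject-hash lookup convention: tools that trust `/etc/ssl/certs` locate a CA by the output of `openssl x509 -hash` plus a `.0` suffix. A minimal Go sketch of that hash-and-symlink step (illustrative only, not the minikube implementation; paths are taken from the log):

```go
// linkBySubjectHash mirrors the shell pattern above: compute the OpenSSL
// subject hash of a PEM certificate, then point <hash>.0 in the cert dir at it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(pemPath, certDir string) error {
	// `openssl x509 -hash -noout -in <pem>` prints the subject hash, e.g. b5213941
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certDir, hash+".0")
	os.Remove(link) // `ln -fs` semantics: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```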
	I0701 12:24:26.411160  653531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 12:24:26.419483  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 12:24:26.437558  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 12:24:26.444826  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 12:24:26.454628  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 12:24:26.467473  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 12:24:26.476039  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
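	The `-checkend 86400` runs above ask openssl whether each certificate expires within the next 24 hours (86400 seconds); a non-zero exit would force regeneration. A hypothetical Go equivalent of that check, using only the standard library:

```go
// expiresWithin reproduces `openssl x509 -checkend <N>` for a PEM file:
// it reports whether the certificate's NotAfter falls inside now + window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
```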
	I0701 12:24:26.482296  653531 kubeadm.go:391] StartCluster: {Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:24:26.482508  653531 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0701 12:24:26.498609  653531 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0701 12:24:26.509374  653531 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0701 12:24:26.509403  653531 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0701 12:24:26.509410  653531 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0701 12:24:26.509466  653531 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 12:24:26.518865  653531 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 12:24:26.519310  653531 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-735960" does not appear in /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:24:26.519460  653531 kubeconfig.go:62] /home/jenkins/minikube-integration/19166-630650/kubeconfig needs updating (will repair): [kubeconfig missing "ha-735960" cluster setting kubeconfig missing "ha-735960" context setting]
	I0701 12:24:26.519772  653531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/kubeconfig: {Name:mke3ef9d019eff4edd273b00c416fd77ed009242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:24:26.520253  653531 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:24:26.520566  653531 kapi.go:59] client config for ha-735960: &rest.Config{Host:"https://192.168.39.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key", CAFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfbb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0701 12:24:26.521041  653531 cert_rotation.go:137] Starting client certificate rotation controller
	I0701 12:24:26.521235  653531 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 12:24:26.530555  653531 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.16
	I0701 12:24:26.530586  653531 kubeadm.go:591] duration metric: took 21.167521ms to restartPrimaryControlPlane
	I0701 12:24:26.530596  653531 kubeadm.go:393] duration metric: took 48.31583ms to StartCluster
	I0701 12:24:26.530618  653531 settings.go:142] acquiring lock: {Name:mk6f7c85ea77a73ff0ac851454721f2e6e309153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:24:26.530700  653531 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:24:26.531272  653531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/kubeconfig: {Name:mke3ef9d019eff4edd273b00c416fd77ed009242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
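	The kubeconfig repair above re-adds the missing "ha-735960" cluster and context entries before rewriting the file under a write lock. A hypothetical sketch of that repair using client-go's clientcmd loader (not minikube's actual kubeconfig.go; the names and server URL are taken from the log):

```go
// ensureCluster re-adds a missing cluster/context pair to a kubeconfig file,
// roughly what the "needs updating (will repair)" step above performs.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func ensureCluster(kubeconfig, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		cfg.Clusters[name] = &api.Cluster{Server: server}
	}
	if _, ok := cfg.Contexts[name]; !ok {
		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	}
	return clientcmd.WriteToFile(*cfg, kubeconfig)
}

func main() {
	if err := ensureCluster(os.Getenv("KUBECONFIG"), "ha-735960", "https://192.168.39.16:8443"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```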
	I0701 12:24:26.531528  653531 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:24:26.531554  653531 start.go:240] waiting for startup goroutines ...
	I0701 12:24:26.531572  653531 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0701 12:24:26.531767  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:26.534496  653531 out.go:177] * Enabled addons: 
	I0701 12:24:26.535873  653531 addons.go:510] duration metric: took 4.304011ms for enable addons: enabled=[]
	I0701 12:24:26.535915  653531 start.go:245] waiting for cluster config update ...
	I0701 12:24:26.535925  653531 start.go:254] writing updated cluster config ...
	I0701 12:24:26.537498  653531 out.go:177] 
	I0701 12:24:26.539211  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:26.539336  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:24:26.541509  653531 out.go:177] * Starting "ha-735960-m02" control-plane node in "ha-735960" cluster
	I0701 12:24:26.542802  653531 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:24:26.542833  653531 cache.go:56] Caching tarball of preloaded images
	I0701 12:24:26.542967  653531 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:24:26.542983  653531 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:24:26.543093  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:24:26.543293  653531 start.go:360] acquireMachinesLock for ha-735960-m02: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:24:26.543355  653531 start.go:364] duration metric: took 39.786µs to acquireMachinesLock for "ha-735960-m02"
	I0701 12:24:26.543382  653531 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:24:26.543393  653531 fix.go:54] fixHost starting: m02
	I0701 12:24:26.543665  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:26.543694  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:26.558741  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34329
	I0701 12:24:26.559300  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:26.559767  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:26.559790  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:26.560107  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:26.560324  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:26.560471  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetState
	I0701 12:24:26.561903  653531 fix.go:112] recreateIfNeeded on ha-735960-m02: state=Stopped err=<nil>
	I0701 12:24:26.561933  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	W0701 12:24:26.562104  653531 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:24:26.564118  653531 out.go:177] * Restarting existing kvm2 VM for "ha-735960-m02" ...
	I0701 12:24:26.565547  653531 main.go:141] libmachine: (ha-735960-m02) Calling .Start
	I0701 12:24:26.565742  653531 main.go:141] libmachine: (ha-735960-m02) Ensuring networks are active...
	I0701 12:24:26.566439  653531 main.go:141] libmachine: (ha-735960-m02) Ensuring network default is active
	I0701 12:24:26.566739  653531 main.go:141] libmachine: (ha-735960-m02) Ensuring network mk-ha-735960 is active
	I0701 12:24:26.567095  653531 main.go:141] libmachine: (ha-735960-m02) Getting domain xml...
	I0701 12:24:26.567681  653531 main.go:141] libmachine: (ha-735960-m02) Creating domain...
	I0701 12:24:27.772734  653531 main.go:141] libmachine: (ha-735960-m02) Waiting to get IP...
	I0701 12:24:27.773478  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:27.773801  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:27.773853  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:27.773777  653719 retry.go:31] will retry after 217.058414ms: waiting for machine to come up
	I0701 12:24:27.992187  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:27.992715  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:27.992745  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:27.992653  653719 retry.go:31] will retry after 295.156992ms: waiting for machine to come up
	I0701 12:24:28.289101  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:28.289597  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:28.289630  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:28.289531  653719 retry.go:31] will retry after 353.406325ms: waiting for machine to come up
	I0701 12:24:28.644006  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:28.644479  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:28.644510  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:28.644437  653719 retry.go:31] will retry after 398.224689ms: waiting for machine to come up
	I0701 12:24:29.044072  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:29.044514  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:29.044545  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:29.044461  653719 retry.go:31] will retry after 547.020131ms: waiting for machine to come up
	I0701 12:24:29.593264  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:29.593690  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:29.593709  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:29.593653  653719 retry.go:31] will retry after 787.756844ms: waiting for machine to come up
	I0701 12:24:30.382731  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:30.383180  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:30.383209  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:30.383137  653719 retry.go:31] will retry after 870.067991ms: waiting for machine to come up
	I0701 12:24:31.254672  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:31.255252  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:31.255285  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:31.255205  653719 retry.go:31] will retry after 1.371479719s: waiting for machine to come up
	I0701 12:24:32.628605  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:32.629092  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:32.629124  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:32.629036  653719 retry.go:31] will retry after 1.347043223s: waiting for machine to come up
	I0701 12:24:33.978739  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:33.979246  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:33.979275  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:33.979195  653719 retry.go:31] will retry after 2.257830197s: waiting for machine to come up
	I0701 12:24:36.239828  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:36.240400  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:36.240433  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:36.240355  653719 retry.go:31] will retry after 2.834526493s: waiting for machine to come up
	I0701 12:24:39.078121  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:39.078416  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:39.078448  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:39.078379  653719 retry.go:31] will retry after 2.465969863s: waiting for machine to come up
	I0701 12:24:41.547043  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.547535  653531 main.go:141] libmachine: (ha-735960-m02) Found IP for machine: 192.168.39.86
	I0701 12:24:41.547569  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has current primary IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.547579  653531 main.go:141] libmachine: (ha-735960-m02) Reserving static IP address...
	I0701 12:24:41.547991  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "ha-735960-m02", mac: "52:54:00:0b:2f:ce", ip: "192.168.39.86"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.548015  653531 main.go:141] libmachine: (ha-735960-m02) Reserved static IP address: 192.168.39.86
	I0701 12:24:41.548032  653531 main.go:141] libmachine: (ha-735960-m02) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960-m02", mac: "52:54:00:0b:2f:ce", ip: "192.168.39.86"}
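	The retry.go lines above poll libvirt's DHCP leases with growing, jittered delays (217ms, 295ms, 353ms, ... 2.8s) until the VM obtains an address. A minimal sketch of that retry-with-backoff shape, assuming a stand-in lookupIP helper rather than the real libvirt query:

```go
// Poll a condition, sleeping a growing randomized interval between attempts,
// mirroring the "will retry after ...: waiting for machine to come up" lines.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// lookupIP is a hypothetical stand-in for querying the domain's DHCP lease.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 { // pretend DHCP takes a few rounds to assign an address
		return "", errNoIP
	}
	return "192.168.39.86", nil
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// jitter the base delay, then grow it for the next round
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}
```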
	I0701 12:24:41.548045  653531 main.go:141] libmachine: (ha-735960-m02) DBG | Getting to WaitForSSH function...
	I0701 12:24:41.548059  653531 main.go:141] libmachine: (ha-735960-m02) Waiting for SSH to be available...
	I0701 12:24:41.550171  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.550523  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.550552  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.550644  653531 main.go:141] libmachine: (ha-735960-m02) DBG | Using SSH client type: external
	I0701 12:24:41.550670  653531 main.go:141] libmachine: (ha-735960-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa (-rw-------)
	I0701 12:24:41.550719  653531 main.go:141] libmachine: (ha-735960-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:24:41.550739  653531 main.go:141] libmachine: (ha-735960-m02) DBG | About to run SSH command:
	I0701 12:24:41.550754  653531 main.go:141] libmachine: (ha-735960-m02) DBG | exit 0
	I0701 12:24:41.678305  653531 main.go:141] libmachine: (ha-735960-m02) DBG | SSH cmd err, output: <nil>: 
	I0701 12:24:41.678691  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetConfigRaw
	I0701 12:24:41.679334  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:24:41.682006  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.682508  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.682540  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.682792  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:24:41.683005  653531 machine.go:94] provisionDockerMachine start ...
	I0701 12:24:41.683030  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:41.683290  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:41.685599  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.685951  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.685979  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.686153  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:41.686378  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.686551  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.686684  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:41.686822  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:41.687030  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:41.687043  653531 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:24:41.802622  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:24:41.802657  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:24:41.802940  653531 buildroot.go:166] provisioning hostname "ha-735960-m02"
	I0701 12:24:41.802963  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:24:41.803281  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:41.805937  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.806443  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.806470  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.806608  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:41.806785  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.807003  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.807154  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:41.807371  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:41.807554  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:41.807567  653531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960-m02 && echo "ha-735960-m02" | sudo tee /etc/hostname
	I0701 12:24:41.938306  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960-m02
	
	I0701 12:24:41.938353  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:41.941077  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.941535  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.941592  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.941765  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:41.941994  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.942161  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.942290  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:41.942491  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:41.942676  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:41.942701  653531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:24:42.062715  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 12:24:42.062750  653531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:24:42.062772  653531 buildroot.go:174] setting up certificates
	I0701 12:24:42.062785  653531 provision.go:84] configureAuth start
	I0701 12:24:42.062795  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:24:42.063134  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:24:42.065907  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.066246  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.066279  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.066490  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:42.068450  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.068818  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.068843  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.068957  653531 provision.go:143] copyHostCerts
	I0701 12:24:42.068988  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:24:42.069022  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:24:42.069030  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:24:42.069082  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:24:42.069156  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:24:42.069173  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:24:42.069180  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:24:42.069199  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:24:42.069241  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:24:42.069257  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:24:42.069263  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:24:42.069279  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:24:42.069326  653531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960-m02 san=[127.0.0.1 192.168.39.86 ha-735960-m02 localhost minikube]
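	The server cert generated above carries the SAN list `[127.0.0.1 192.168.39.86 ha-735960-m02 localhost minikube]`, so TLS verification succeeds whichever of those names or addresses a client dials. A standard-library sketch of issuing a certificate with that SAN set (self-signed here for brevity; minikube signs with its CA key instead):

```go
// Emit a PEM server certificate whose SANs match the provision log above.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-735960-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// the subject-alternative names from the log's san=[...] list
		DNSNames:    []string{"ha-735960-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.86")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```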
	I0701 12:24:42.315961  653531 provision.go:177] copyRemoteCerts
	I0701 12:24:42.316035  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:24:42.316061  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:42.318992  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.319361  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.319395  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.319557  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:42.319740  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.319969  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:42.320092  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:24:42.408924  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:24:42.408999  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:24:42.434942  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:24:42.435038  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0701 12:24:42.458628  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:24:42.458728  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 12:24:42.482505  653531 provision.go:87] duration metric: took 419.705556ms to configureAuth
	I0701 12:24:42.482536  653531 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:24:42.482760  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:42.482797  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:42.483103  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:42.485829  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.486249  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.486277  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.486574  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:42.486846  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.487031  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.487211  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:42.487420  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:42.487596  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:42.487608  653531 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:24:42.603937  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:24:42.603962  653531 buildroot.go:70] root file system type: tmpfs
	I0701 12:24:42.604101  653531 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:24:42.604123  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:42.606937  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.607326  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.607351  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.607512  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:42.607762  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.607935  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.608131  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:42.608318  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:42.608490  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:42.608578  653531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.16"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:24:42.731927  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.16
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:24:42.731963  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:42.735092  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.735552  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.735586  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.735721  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:42.735916  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.736097  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.736226  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:42.736425  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:42.736596  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:42.736613  653531 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:24:44.641546  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 12:24:44.641584  653531 machine.go:97] duration metric: took 2.958558644s to provisionDockerMachine
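	The SSH command above uses a `diff ... || { mv; daemon-reload; enable; restart; }` guard so docker is only reconfigured and restarted when the generated unit actually differs from the one on disk (here diff failed because no unit existed yet, so the new one was installed). A hedged Go re-expression of that update-only-if-changed pattern:

```go
// updateUnit installs newContent at path and bounces docker, but only when the
// on-disk unit differs -- the Go shape of the shell guard in the log above.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func updateUnit(path string, newContent []byte) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return nil // unchanged: skip daemon-reload and restart entirely
	}
	if err := os.WriteFile(path, newContent, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // placeholder content
	if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```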
	I0701 12:24:44.641601  653531 start.go:293] postStartSetup for "ha-735960-m02" (driver="kvm2")
	I0701 12:24:44.641615  653531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:24:44.641637  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:44.642004  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:24:44.642040  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:44.645224  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.645706  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:44.645738  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.645868  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:44.646053  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:44.646222  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:44.646376  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:24:44.736407  653531 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:24:44.740656  653531 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:24:44.740682  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:24:44.740758  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:24:44.740835  653531 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:24:44.740848  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:24:44.740945  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:24:44.749928  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:24:44.772391  653531 start.go:296] duration metric: took 130.772957ms for postStartSetup
	I0701 12:24:44.772467  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:44.772787  653531 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:24:44.772824  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:44.775217  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.775582  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:44.775607  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.775804  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:44.776027  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:44.776203  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:44.776383  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:24:44.864587  653531 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:24:44.864665  653531 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:24:44.904439  653531 fix.go:56] duration metric: took 18.361036234s for fixHost
	I0701 12:24:44.904495  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:44.907382  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.907911  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:44.907944  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.908260  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:44.908504  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:44.908689  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:44.908847  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:44.909036  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:44.909257  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:44.909273  653531 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0701 12:24:45.022815  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836684.998547011
	
	I0701 12:24:45.022845  653531 fix.go:216] guest clock: 1719836684.998547011
	I0701 12:24:45.022855  653531 fix.go:229] Guest: 2024-07-01 12:24:44.998547011 +0000 UTC Remote: 2024-07-01 12:24:44.904469964 +0000 UTC m=+42.374321626 (delta=94.077047ms)
	I0701 12:24:45.022878  653531 fix.go:200] guest clock delta is within tolerance: 94.077047ms
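	The clock check above parses the guest's `date +%s.%N` output, diffs it against the host clock at the moment the command returned, and accepts the skew (94ms here) only if it is inside a tolerance window; otherwise minikube would resync the guest clock. A minimal sketch of that comparison, with a hardcoded sample from the log standing in for live SSH output:

```go
// clockDeltaOK parses fractional-epoch-seconds output from the guest and
// reports whether the host/guest skew is within tolerance.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func clockDeltaOK(guestOut string, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	return delta, math.Abs(float64(delta)) <= float64(tolerance), nil
}

func main() {
	// sample value copied from the log; a live check would read this over SSH
	delta, ok, err := clockDeltaOK("1719836684.998547011", 2*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}
```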
	I0701 12:24:45.022885  653531 start.go:83] releasing machines lock for "ha-735960-m02", held for 18.479517819s
	I0701 12:24:45.022904  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:45.023158  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:24:45.025946  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.026429  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:45.026468  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.028669  653531 out.go:177] * Found network options:
	I0701 12:24:45.030344  653531 out.go:177]   - NO_PROXY=192.168.39.16
	W0701 12:24:45.031921  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:24:45.031959  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:45.032658  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:45.032888  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:45.033013  653531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:24:45.033058  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	W0701 12:24:45.033081  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:24:45.033171  653531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0701 12:24:45.033195  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:45.035752  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.035981  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.036219  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:45.036245  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.036348  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:45.036378  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.036406  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:45.036593  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:45.036652  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:45.036754  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:45.036826  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:45.036903  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:45.036969  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:24:45.037025  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	W0701 12:24:45.137872  653531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:24:45.137946  653531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:24:45.154683  653531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:24:45.154717  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:24:45.154827  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:24:45.176886  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:24:45.188345  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:24:45.197947  653531 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:24:45.198012  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:24:45.207676  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:24:45.217559  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:24:45.227803  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:24:45.238295  653531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:24:45.248764  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:24:45.258909  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:24:45.268726  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:24:45.279039  653531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:24:45.288042  653531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:24:45.296914  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:45.411404  653531 ssh_runner.go:195] Run: sudo systemctl restart containerd
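
	The run of sed commands above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver before the restart. The core substitution (forcing `SystemdCgroup = false` while preserving indentation) can be reproduced in Go; a hedged sketch over a made-up config snippet:

```go
// Sketch (an assumption, not minikube's code) of the SystemdCgroup
// rewrite performed by the sed command in the log above.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
```
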
	I0701 12:24:45.436012  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:24:45.436122  653531 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:24:45.450142  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:24:45.462829  653531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:24:45.481152  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:24:45.494283  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:24:45.507074  653531 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:24:45.534155  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:24:45.547185  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:24:45.564773  653531 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:24:45.568760  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:24:45.577542  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:24:45.593021  653531 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:24:45.701211  653531 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:24:45.815750  653531 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:24:45.815810  653531 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 12:24:45.831989  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:45.941168  653531 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:24:48.340550  653531 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.399331326s)
	I0701 12:24:48.340643  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 12:24:48.354582  653531 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0701 12:24:48.370449  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:24:48.383634  653531 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:24:48.491334  653531 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:24:48.612412  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:48.742773  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:24:48.759856  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:24:48.772621  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:48.884376  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0701 12:24:48.964457  653531 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:24:48.964538  653531 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0701 12:24:48.970016  653531 start.go:562] Will wait 60s for crictl version
	I0701 12:24:48.970082  653531 ssh_runner.go:195] Run: which crictl
	I0701 12:24:48.974017  653531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:24:49.010380  653531 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.1
	RuntimeApiVersion:  v1
	I0701 12:24:49.010470  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:24:49.038204  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:24:49.060452  653531 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
	I0701 12:24:49.061662  653531 out.go:177]   - env NO_PROXY=192.168.39.16
	I0701 12:24:49.062894  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:24:49.065420  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:49.065726  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:49.065756  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:49.065973  653531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0701 12:24:49.070110  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:24:49.082188  653531 mustload.go:65] Loading cluster: ha-735960
	I0701 12:24:49.082530  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:49.082941  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:49.082993  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:49.097892  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43153
	I0701 12:24:49.098396  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:49.098894  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:49.098917  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:49.099215  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:49.099436  653531 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:24:49.100798  653531 host.go:66] Checking if "ha-735960" exists ...
	I0701 12:24:49.101079  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:49.101112  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:49.115736  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34567
	I0701 12:24:49.116185  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:49.116654  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:49.116678  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:49.117007  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:49.117203  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:49.117366  653531 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960 for IP: 192.168.39.86
	I0701 12:24:49.117380  653531 certs.go:194] generating shared ca certs ...
	I0701 12:24:49.117398  653531 certs.go:226] acquiring lock for ca certs: {Name:mk34e166bfd069e523b2325e14d1812c523bff53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:24:49.117551  653531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key
	I0701 12:24:49.117591  653531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key
	I0701 12:24:49.117600  653531 certs.go:256] generating profile certs ...
	I0701 12:24:49.117669  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key
	I0701 12:24:49.117728  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.b19d6c48
	I0701 12:24:49.117760  653531 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key
	I0701 12:24:49.117771  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 12:24:49.117786  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 12:24:49.117800  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 12:24:49.117811  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 12:24:49.117823  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0701 12:24:49.117835  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0701 12:24:49.117847  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0701 12:24:49.117858  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0701 12:24:49.117903  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem (1338 bytes)
	W0701 12:24:49.117934  653531 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854_empty.pem, impossibly tiny 0 bytes
	I0701 12:24:49.117946  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 12:24:49.117973  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:24:49.117994  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:24:49.118013  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem (1675 bytes)
	I0701 12:24:49.118048  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:24:49.118076  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem -> /usr/share/ca-certificates/637854.pem
	I0701 12:24:49.118092  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /usr/share/ca-certificates/6378542.pem
	I0701 12:24:49.118104  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:49.118150  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:49.120907  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:49.121392  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:49.121418  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:49.121523  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:49.121694  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:49.121825  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:49.121959  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:49.190715  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0701 12:24:49.195755  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0701 12:24:49.206197  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0701 12:24:49.209869  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0701 12:24:49.219170  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0701 12:24:49.223114  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0701 12:24:49.233000  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0701 12:24:49.237162  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0701 12:24:49.246812  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0701 12:24:49.250554  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0701 12:24:49.259926  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0701 12:24:49.263843  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0701 12:24:49.274536  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:24:49.299467  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:24:49.322887  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:24:49.345311  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:24:49.367988  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0701 12:24:49.390632  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 12:24:49.416047  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 12:24:49.439560  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 12:24:49.462382  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem --> /usr/share/ca-certificates/637854.pem (1338 bytes)
	I0701 12:24:49.484590  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /usr/share/ca-certificates/6378542.pem (1708 bytes)
	I0701 12:24:49.507507  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:24:49.529932  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0701 12:24:49.545966  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0701 12:24:49.561557  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0701 12:24:49.577402  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0701 12:24:49.593250  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0701 12:24:49.609739  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0701 12:24:49.626015  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0701 12:24:49.643897  653531 ssh_runner.go:195] Run: openssl version
	I0701 12:24:49.649608  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/637854.pem && ln -fs /usr/share/ca-certificates/637854.pem /etc/ssl/certs/637854.pem"
	I0701 12:24:49.660203  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/637854.pem
	I0701 12:24:49.664449  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 12:11 /usr/share/ca-certificates/637854.pem
	I0701 12:24:49.664503  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/637854.pem
	I0701 12:24:49.670228  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/637854.pem /etc/ssl/certs/51391683.0"
	I0701 12:24:49.680554  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6378542.pem && ln -fs /usr/share/ca-certificates/6378542.pem /etc/ssl/certs/6378542.pem"
	I0701 12:24:49.690901  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6378542.pem
	I0701 12:24:49.695200  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 12:11 /usr/share/ca-certificates/6378542.pem
	I0701 12:24:49.695266  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem
	I0701 12:24:49.700503  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6378542.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 12:24:49.710442  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:24:49.720297  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:49.724530  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:49.724590  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:49.729832  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 12:24:49.739574  653531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 12:24:49.743717  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 12:24:49.749498  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 12:24:49.755217  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 12:24:49.761210  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 12:24:49.767138  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 12:24:49.772853  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
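
	Each `openssl x509 -checkend 86400` run above asks whether a certificate remains valid for at least the next 24 hours (86400 seconds). The equivalent check in Go's crypto/x509, with an assumed file path taken from the log:

```go
// Sketch of the -checkend 86400 semantics: report whether a PEM
// certificate expires within the given window. The path below is an
// assumption for illustration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires
// within duration d of now.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```
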
	I0701 12:24:49.778598  653531 kubeadm.go:928] updating node {m02 192.168.39.86 8443 v1.30.2 docker true true} ...
	I0701 12:24:49.778706  653531 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0701 12:24:49.778735  653531 kube-vip.go:115] generating kube-vip config ...
	I0701 12:24:49.778769  653531 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0701 12:24:49.792722  653531 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0701 12:24:49.792794  653531 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
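
	The manifest above is the static pod minikube writes so kube-vip can serve the control-plane VIP (192.168.39.254, load balancing on port 8443). A speculative sketch of rendering such a manifest from a Go template; the template text and field names are illustrative, not minikube's kube-vip.go:

```go
// Illustrative manifest templating: fill a trimmed-down kube-vip pod
// spec from parameters. Fields and template are assumptions.
package main

import (
	"os"
	"text/template"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    env:
    - name: address
      value: {{.VIP}}
    - name: lb_port
      value: "{{.Port}}"
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(manifest))
	_ = tmpl.Execute(os.Stdout, struct {
		Image, VIP string
		Port       int
	}{"ghcr.io/kube-vip/kube-vip:v0.8.0", "192.168.39.254", 8443})
}
```
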
	I0701 12:24:49.792861  653531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 12:24:49.804161  653531 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:24:49.804241  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0701 12:24:49.814550  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0701 12:24:49.831390  653531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:24:49.848397  653531 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0701 12:24:49.865443  653531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0701 12:24:49.869104  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:24:49.880669  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:49.995061  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:24:50.012084  653531 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:24:50.012461  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:50.014165  653531 out.go:177] * Verifying Kubernetes components...
	I0701 12:24:50.015753  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:50.164868  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:24:50.189841  653531 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:24:50.190056  653531 kapi.go:59] client config for ha-735960: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key", CAFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfbb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0701 12:24:50.190130  653531 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.16:8443
	I0701 12:24:50.190323  653531 node_ready.go:35] waiting up to 6m0s for node "ha-735960-m02" to be "Ready" ...
	I0701 12:24:50.190456  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:50.190466  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:50.190477  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:50.190487  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:54.343288  653531 round_trippers.go:574] Response Status:  in 4152 milliseconds
	I0701 12:24:55.343662  653531 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:55.343730  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:55.343744  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:55.343754  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:55.343758  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:55.344302  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:55.344422  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.1:52872->192.168.39.16:8443: read: connection reset by peer
	I0701 12:24:55.344514  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:55.344528  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:55.344538  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:55.344544  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:55.344874  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:55.691490  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:55.691516  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:55.691527  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:55.691533  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:55.691976  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:56.190655  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:56.190680  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:56.190689  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:56.190694  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:56.191223  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:56.690634  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:56.690660  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:56.690669  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:56.690672  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:56.691171  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:57.190543  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:57.190576  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:57.190588  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:57.190593  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:57.191164  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:57.691155  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:57.691185  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:57.691197  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:57.691205  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:57.691722  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:57.691807  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused
	I0701 12:24:58.190799  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:58.190827  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:58.190841  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:58.190847  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:58.191262  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:58.690909  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:58.690934  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:58.690943  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:58.690947  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:58.691435  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:59.191343  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:59.191369  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:59.191379  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:59.191385  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:59.191790  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:59.691540  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:59.691570  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:59.691582  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:59.691587  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:59.692063  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:59.692155  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused
	I0701 12:25:00.190742  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:00.190767  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:00.190776  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:00.190780  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:00.191351  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:00.691648  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:00.691679  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:00.691691  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:00.691697  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:00.692126  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:01.190745  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:01.190769  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:01.190778  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:01.190784  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:01.191282  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:01.691565  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:01.691597  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:01.691614  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:01.691621  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:01.692000  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:02.191662  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:02.191693  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:02.191706  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:02.191714  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:02.192140  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:02.192224  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused
	I0701 12:25:02.691148  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:02.691173  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:02.691180  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:02.691185  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:02.691566  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:03.190561  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:03.190591  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:03.190603  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:03.190611  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:03.191147  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:03.690811  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:03.690839  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:03.690849  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:03.690854  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:03.691458  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:04.191099  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:04.191130  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:04.191142  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:04.191147  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:04.191609  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:04.691342  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:04.691368  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:04.691376  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:04.691380  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:04.691811  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:04.691897  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused
	I0701 12:25:05.191508  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:05.191532  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:05.191540  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:05.191550  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:05.192027  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:05.690552  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:05.690579  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:05.690588  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:05.690592  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:05.691114  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:06.190741  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:06.190773  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:06.190785  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:06.190790  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:06.191210  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:06.690600  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:06.690630  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:06.690640  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:06.690646  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:06.691129  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:07.191607  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:07.191631  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:07.191639  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:07.191643  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:07.192193  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:07.192283  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused
	I0701 12:25:07.691099  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:07.691129  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:07.691140  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:07.691145  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:07.691572  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:08.191598  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:08.191623  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:08.191632  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:08.191636  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:08.192026  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:08.690679  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:08.690702  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:08.690713  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:08.690717  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:08.691142  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:09.190900  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:09.190924  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:09.190932  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:09.190938  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:09.191395  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:09.690594  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:09.690615  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:09.690623  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:09.690629  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:11.690040  653531 round_trippers.go:574] Response Status: 200 OK in 1999 milliseconds
	I0701 12:25:11.702263  653531 node_ready.go:49] node "ha-735960-m02" has status "Ready":"True"
	I0701 12:25:11.702299  653531 node_ready.go:38] duration metric: took 21.511933368s for node "ha-735960-m02" to be "Ready" ...
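
	The 21.5s wait just logged is a plain poll loop: GET the node object roughly every 500ms, tolerate connection-refused errors while the apiserver restarts behind the VIP, and stop once the Ready condition is True or the budget expires. A self-contained Go sketch under those assumptions (the apiserver call is stubbed, not minikube's node_ready.go):

```go
// Sketch of a node-Ready poll loop with a fixed retry interval.
// getNodeReady stands in for the GET /api/v1/nodes/<name> calls above.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// getNodeReady is a stub for the apiserver request; a real version
// would issue the GET and inspect the node's Ready condition.
func getNodeReady(ctx context.Context, name string) (bool, error) {
	return true, nil // assume Ready for the demo
}

func waitNodeReady(ctx context.Context, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ready, err := getNodeReady(ctx, name)
		if err == nil && ready {
			return nil
		}
		// Connection refused while the control plane restarts is
		// expected; sleep and retry, as the repeated GETs above show.
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for node " + name)
}

func main() {
	if err := waitNodeReady(context.Background(), "ha-735960-m02", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```

	With the node Ready, the log moves on to the analogous wait for system-critical pods.
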
	I0701 12:25:11.702313  653531 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 12:25:11.702416  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:25:11.702430  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:11.702441  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:11.702454  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:11.789461  653531 round_trippers.go:574] Response Status: 200 OK in 86 milliseconds
	I0701 12:25:11.802344  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:11.802466  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:11.802476  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:11.802483  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:11.802487  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:11.816015  653531 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0701 12:25:11.816768  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:11.816789  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:11.816801  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:11.816808  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:11.831063  653531 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0701 12:25:12.302968  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:12.302992  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:12.303000  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:12.303004  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:12.307067  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:12.308122  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:12.308138  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:12.308146  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:12.308150  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:12.311874  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:12.803638  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:12.803667  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:12.803679  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:12.803686  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:12.814049  653531 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0701 12:25:12.814887  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:12.814910  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:12.814921  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:12.814925  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:12.821738  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:25:13.303576  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:13.303600  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:13.303608  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:13.303614  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:13.307218  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:13.308090  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:13.308106  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:13.308113  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:13.308117  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:13.311302  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:13.803234  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:13.803266  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:13.803274  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:13.803277  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:13.806287  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:13.807004  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:13.807020  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:13.807029  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:13.807032  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:13.809746  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:13.810211  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
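	
	(Annotation: pod_ready.go polls the coredns pod roughly every 500ms, re-reading the pod and its node each round, and keeps reporting while the pod's Ready condition is still False. Below is a minimal sketch of such a readiness poll using client-go and apimachinery's wait helper (recent versions); the kubeconfig path, timeout budget, and error handling are assumptions for illustration, not minikube's exact pod_ready implementation.)
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		// Assumed kubeconfig path; adjust for your environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		// Poll every 500ms, as in the log above, until the pod is Ready
		// or the (assumed) 4-minute budget runs out.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-nk4lf", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat API errors as transient: keep polling
				}
				return isPodReady(pod), nil
			})
		fmt.Println("ready:", err == nil)
	}
	
	(Returning (false, nil) from the condition keeps the poll going; returning a non-nil error would abort it early.)
	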
	I0701 12:25:14.302637  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:14.302668  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:14.302676  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:14.302680  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:14.306137  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:14.306904  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:14.306920  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:14.306928  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:14.306932  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:14.309754  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:14.802564  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:14.802587  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:14.802595  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:14.802599  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:14.808775  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:25:14.809568  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:14.809588  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:14.809596  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:14.809601  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:14.812414  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:15.303353  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:15.303378  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:15.303386  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:15.303391  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:15.306881  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:15.307679  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:15.307702  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:15.307712  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:15.307721  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:15.310551  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:15.802545  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:15.802569  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:15.802577  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:15.802582  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:15.806303  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:15.807445  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:15.807462  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:15.807473  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:15.807479  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:15.813688  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:25:15.814187  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:16.303627  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:16.303655  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:16.303664  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:16.303667  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:16.307153  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:16.307819  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:16.307838  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:16.307848  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:16.307854  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:16.317298  653531 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0701 12:25:16.802946  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:16.802971  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:16.802979  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:16.802985  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:16.806421  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:16.807100  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:16.807120  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:16.807130  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:16.807135  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:16.809697  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:17.302581  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:17.302628  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:17.302640  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:17.302648  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:17.307226  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:17.307905  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:17.307922  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:17.307929  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:17.307936  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:17.311203  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:17.803470  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:17.803514  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:17.803526  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:17.803531  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:17.812734  653531 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0701 12:25:17.813577  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:17.813595  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:17.813601  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:17.813608  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:17.818648  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:25:17.819270  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:18.302575  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:18.302597  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:18.302605  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:18.302610  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:18.306847  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:18.307906  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:18.307927  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:18.307937  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:18.307943  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:18.310841  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:18.802657  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:18.802681  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:18.802689  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:18.802692  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:18.805685  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:18.806415  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:18.806434  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:18.806444  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:18.806451  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:18.809781  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:19.303618  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:19.303642  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:19.303650  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:19.303655  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:19.307473  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:19.308257  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:19.308275  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:19.308282  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:19.308286  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:19.311108  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:19.802669  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:19.802691  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:19.802700  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:19.802703  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:19.805915  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:19.806623  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:19.806641  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:19.806648  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:19.806653  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:19.809291  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:20.303135  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:20.303161  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:20.303169  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:20.303173  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:20.306861  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:20.307600  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:20.307618  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:20.307626  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:20.307630  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:20.310953  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:20.311503  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
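	
	(Annotation: the second GET in each round re-fetches the node ha-735960; a readiness gate typically checks the node's Ready condition the same way the pod check does. A self-contained sketch of that condition check follows; this is an assumption about intent, not minikube's code.)
	
	package main
	
	import (
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
	)
	
	// isNodeReady reports whether a node's Ready condition is True.
	func isNodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		// Constructed node object for demonstration only.
		n := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
			{Type: corev1.NodeReady, Status: corev1.ConditionTrue},
		}}}
		fmt.Println(isNodeReady(n)) // prints: true
	}
	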
	I0701 12:25:20.803608  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:20.803633  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:20.803642  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:20.803645  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:20.807878  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:20.808941  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:20.808961  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:20.808969  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:20.808973  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:20.811817  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:21.303623  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:21.303648  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:21.303658  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:21.303662  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:21.307962  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:21.308821  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:21.308839  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:21.308846  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:21.308850  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:21.311792  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:21.803197  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:21.803227  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:21.803239  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:21.803244  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:21.806108  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:21.807085  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:21.807105  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:21.807138  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:21.807147  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:21.809757  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:22.302567  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:22.302593  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:22.302601  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:22.302608  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:22.306177  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:22.307066  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:22.307082  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:22.307091  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:22.307097  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:22.309849  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:22.803488  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:22.803511  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:22.803519  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:22.803523  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:22.807098  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:22.807809  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:22.807828  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:22.807839  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:22.807846  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:22.810906  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:22.811518  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:23.303611  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:23.303700  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:23.303719  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:23.303725  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:23.307759  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:23.308638  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:23.308659  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:23.308669  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:23.308674  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:23.312265  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:23.803188  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:23.803211  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:23.803222  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:23.803227  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:23.808854  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:25:23.810030  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:23.810047  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:23.810057  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:23.810066  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:23.813689  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:24.303587  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:24.303609  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:24.303617  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:24.303622  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:24.306935  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:24.307770  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:24.307786  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:24.307794  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:24.307798  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:24.318402  653531 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0701 12:25:24.803269  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:24.803292  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:24.803302  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:24.803307  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:24.806559  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:24.807235  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:24.807252  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:24.807259  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:24.807264  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:24.809568  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:25.303424  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:25.303447  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:25.303457  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:25.303462  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:25.306169  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:25.306850  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:25.306869  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:25.306877  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:25.306881  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:25.309797  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:25.310316  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:25.803598  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:25.803625  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:25.803636  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:25.803641  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:25.807180  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:25.808080  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:25.808098  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:25.808106  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:25.808110  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:25.810694  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:26.303736  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:26.303758  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:26.303769  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:26.303774  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:26.307524  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:26.308268  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:26.308293  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:26.308304  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:26.308309  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:26.311520  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:26.803295  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:26.803319  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:26.803328  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:26.803332  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:26.806546  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:26.807183  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:26.807197  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:26.807204  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:26.807208  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:26.809974  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:27.302802  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:27.302827  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:27.302836  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:27.302840  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:27.305889  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:27.306573  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:27.306591  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:27.306598  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:27.306602  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:27.309203  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:27.802871  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:27.802896  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:27.802904  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:27.802908  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:27.806439  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:27.807255  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:27.807275  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:27.807283  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:27.807286  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:27.810137  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:27.810761  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:28.303255  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:28.303283  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:28.303295  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:28.303300  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:28.306809  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:28.307731  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:28.307752  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:28.307762  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:28.307768  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:28.311028  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:28.802544  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:28.802570  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:28.802580  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:28.802585  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:28.805960  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:28.806724  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:28.806740  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:28.806815  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:28.806826  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:28.809472  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:29.303397  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:29.303427  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:29.303438  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:29.303443  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:29.306785  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:29.307565  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:29.307584  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:29.307592  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:29.307596  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:29.310517  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:29.802683  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:29.802709  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:29.802717  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:29.802720  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:29.806680  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:29.807385  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:29.807404  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:29.807414  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:29.807420  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:29.810474  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:29.811143  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:30.303599  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:30.303629  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:30.303639  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:30.303643  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:30.307801  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:30.308475  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:30.308491  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:30.308498  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:30.308503  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:30.311947  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:30.802655  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:30.802680  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:30.802688  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:30.802692  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:30.806031  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:30.806743  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:30.806762  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:30.806769  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:30.806774  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:30.809315  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:31.303311  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:31.303340  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:31.303350  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:31.303354  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:31.306583  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:31.307361  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:31.307384  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:31.307395  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:31.307399  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:31.311058  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:31.802712  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:31.802740  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:31.802749  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:31.802753  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:31.806584  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:31.807317  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:31.807336  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:31.807347  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:31.807361  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:31.810401  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:32.303636  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:32.303663  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:32.303671  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:32.303676  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:32.307011  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:32.307797  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:32.307815  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:32.307825  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:32.307831  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:32.314944  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:25:32.315492  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:32.802803  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:32.802830  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:32.802838  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:32.802844  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:32.807127  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:32.807884  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:32.807907  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:32.807917  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:32.807922  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:32.811565  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:33.303372  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:33.303399  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:33.303416  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:33.303421  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:33.307271  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:33.307961  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:33.307981  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:33.307988  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:33.308001  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:33.310760  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:33.802604  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:33.802631  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:33.802640  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:33.802643  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:33.806300  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:33.807219  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:33.807238  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:33.807245  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:33.807250  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:33.810578  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:34.303606  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:34.303632  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:34.303640  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:34.303644  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:34.308029  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:34.309132  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:34.309159  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:34.309172  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:34.309180  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:34.313056  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:34.803231  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:34.803261  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:34.803273  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:34.803278  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:34.806971  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:34.807591  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:34.807609  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:34.807617  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:34.807621  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:34.810457  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:34.810998  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:35.303350  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:35.303377  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:35.303386  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:35.303390  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:35.307557  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:35.310343  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:35.310361  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:35.310370  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:35.310374  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:35.314047  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:35.803318  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:35.803343  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:35.803352  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:35.803355  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:35.806663  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:35.807415  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:35.807435  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:35.807451  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:35.807460  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:35.810577  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:36.303513  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:36.303545  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:36.303577  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:36.303584  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:36.307367  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:36.308070  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:36.308089  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:36.308100  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:36.308106  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:36.312298  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:36.803266  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:36.803291  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:36.803299  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:36.803303  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:36.807158  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:36.807888  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:36.807906  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:36.807913  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:36.807918  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:36.811315  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:36.811752  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:37.303051  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:37.303079  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:37.303090  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:37.303094  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:37.307312  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:37.308243  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:37.308264  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:37.308275  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:37.308282  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:37.311883  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:37.802545  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:37.802572  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:37.802581  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:37.802585  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:37.805697  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:37.806592  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:37.806612  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:37.806622  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:37.806627  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:37.809149  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:38.302574  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:38.302602  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:38.302615  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:38.302621  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:38.306531  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:38.307159  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:38.307178  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:38.307189  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:38.307193  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:38.310496  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:38.803467  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:38.803495  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:38.803504  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:38.803509  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:38.807052  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:38.807927  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:38.807944  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:38.807951  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:38.807956  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:38.810712  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:39.302764  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:39.302790  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:39.302801  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:39.302805  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:39.306507  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:39.307614  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:39.307633  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:39.307641  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:39.307645  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:39.311327  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:39.311854  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:39.803193  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:39.803216  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:39.803225  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:39.803229  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:39.806519  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:39.807496  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:39.807515  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:39.807525  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:39.807532  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:39.810711  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:40.303599  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:40.303624  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:40.303633  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:40.303637  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:40.307414  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:40.308201  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:40.308227  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:40.308236  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:40.308242  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:40.313547  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:25:40.803513  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:40.803535  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:40.803543  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:40.803548  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:40.806979  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:40.807738  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:40.807753  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:40.807761  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:40.807765  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:40.810649  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:41.303319  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:41.303343  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:41.303351  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:41.303355  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:41.307376  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:41.307943  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:41.307958  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:41.307965  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:41.307970  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:41.311161  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:41.803525  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:41.803549  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:41.803556  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:41.803559  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:41.806564  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:41.807431  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:41.807453  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:41.807464  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:41.807470  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:41.810527  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:41.811143  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:42.303619  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:42.303650  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:42.303662  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:42.303670  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:42.307838  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:42.308516  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:42.308536  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:42.308544  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:42.308550  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:42.312418  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:42.803505  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:42.803530  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:42.803540  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:42.803543  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:42.807116  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:42.808027  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:42.808044  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:42.808051  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:42.808055  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:42.810713  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:43.303632  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:43.303654  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:43.303664  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:43.303668  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:43.307247  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:43.307986  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:43.308002  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:43.308009  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:43.308013  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:43.310824  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:43.802592  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:43.802620  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:43.802628  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:43.802632  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:43.806238  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:43.807037  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:43.807059  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:43.807072  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:43.807076  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:43.809889  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:44.302994  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:44.303018  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:44.303026  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:44.303030  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:44.306644  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:44.307454  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:44.307470  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:44.307478  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:44.307482  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:44.311122  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:44.311762  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:44.803237  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:44.803267  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:44.803279  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:44.803286  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:44.807350  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:44.808020  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:44.808038  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:44.808045  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:44.808051  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:44.810846  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:45.302711  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:45.302735  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:45.302744  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:45.302748  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:45.306615  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:45.307478  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:45.307497  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:45.307508  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:45.307514  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:45.310453  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:45.803401  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:45.803428  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:45.803439  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:45.803444  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:45.807308  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:45.808014  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:45.808029  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:45.808036  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:45.808039  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:45.810822  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:46.302557  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:46.302584  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:46.302597  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:46.302601  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:46.306132  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:46.306862  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:46.306879  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:46.306888  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:46.306894  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:46.310611  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:46.803427  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:46.803455  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:46.803467  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:46.803474  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:46.807174  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:46.807896  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:46.807913  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:46.807921  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:46.807924  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:46.810938  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:46.811392  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:47.302820  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:47.302850  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:47.302859  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:47.302863  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:47.306419  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:47.307190  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:47.307211  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:47.307218  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:47.307222  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:47.309980  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:47.803501  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:47.803525  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:47.803534  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:47.803537  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:47.808075  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:47.808877  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:47.808896  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:47.808905  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:47.808910  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:47.815820  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:25:48.302668  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:48.302699  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:48.302709  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:48.302716  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:48.308126  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:25:48.308931  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:48.308949  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:48.308960  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:48.308965  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:48.317071  653531 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0701 12:25:48.802646  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:48.802669  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:48.802678  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:48.802682  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:48.807515  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:48.808381  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:48.808403  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:48.808413  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:48.808422  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:48.811034  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:48.811475  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:49.303193  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:49.303217  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:49.303225  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:49.303230  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:49.307574  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:49.308269  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:49.308285  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:49.308293  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:49.308297  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:49.312047  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:49.802745  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:49.802768  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:49.802776  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:49.802780  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:49.806546  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:49.807294  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:49.807313  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:49.807321  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:49.807326  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:49.810700  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:50.303644  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:50.303674  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:50.303684  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:50.303688  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:50.308034  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:50.308788  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:50.308807  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:50.308817  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:50.308823  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:50.313190  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:50.802959  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:50.802983  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:50.802992  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:50.802996  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:50.806875  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:50.807540  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:50.807558  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:50.807566  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:50.807571  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:50.810319  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:51.303292  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:51.303322  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:51.303334  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:51.303339  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:51.307067  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:51.307838  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:51.307858  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:51.307869  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:51.307875  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:51.312843  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:51.313579  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:51.803287  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:51.803312  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:51.803323  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:51.803329  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:51.807231  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:51.807995  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:51.808012  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:51.808020  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:51.808024  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:51.810740  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:52.303605  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:52.303629  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:52.303638  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:52.303643  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:52.306821  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:52.307565  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:52.307584  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:52.307594  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:52.307602  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:52.311075  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:52.803586  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:52.803610  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:52.803619  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:52.803623  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:52.807457  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:52.808236  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:52.808255  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:52.808266  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:52.808272  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:52.811703  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:53.303621  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:53.303644  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:53.303652  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:53.303656  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:53.310115  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:25:53.310845  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:53.310863  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:53.310874  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:53.310878  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:53.313553  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:53.314016  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:53.803325  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:53.803349  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:53.803357  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:53.803361  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:53.806896  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:53.807585  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:53.807601  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:53.807608  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:53.807613  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:53.810245  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:54.302928  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:54.302952  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:54.302960  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:54.302963  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:54.306523  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:54.307165  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:54.307184  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:54.307195  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:54.307203  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:54.310455  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:54.803344  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:54.803367  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:54.803377  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:54.803380  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:54.806607  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:54.807210  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:54.807225  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:54.807233  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:54.807236  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:54.809746  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:55.303597  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:55.303623  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:55.303633  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:55.303637  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:55.307054  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:55.307759  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:55.307774  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:55.307781  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:55.307788  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:55.313043  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:25:55.802698  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:55.802725  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:55.802736  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:55.802745  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:55.805918  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:55.806665  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:55.806682  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:55.806690  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:55.806694  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:55.809347  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:55.809833  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:56.303433  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:56.303460  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:56.303471  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:56.303479  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:56.307327  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:56.308094  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:56.308118  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:56.308126  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:56.308130  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:56.311241  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:56.803577  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:56.803605  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:56.803612  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:56.803616  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:56.806932  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:56.807699  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:56.807716  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:56.807724  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:56.807727  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:56.812547  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:57.303545  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:57.303573  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:57.303582  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:57.303586  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:57.307516  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:57.308162  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:57.308179  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:57.308186  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:57.308193  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:57.310961  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:57.803457  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:57.803482  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:57.803493  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:57.803500  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:57.807806  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:57.808679  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:57.808694  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:57.808704  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:57.808711  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:57.811544  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:57.811984  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:58.303446  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:58.303471  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:58.303480  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:58.303484  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:58.307082  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:58.307737  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:58.307754  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:58.307762  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:58.307770  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:58.310778  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:58.803647  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:58.803671  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:58.803680  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:58.803690  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:58.807621  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:58.808241  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:58.808258  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:58.808266  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:58.808271  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:58.811002  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.302934  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:59.302961  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.302971  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.302976  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.306476  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:59.307188  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:59.307205  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.307213  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.307216  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.312012  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:59.803004  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:59.803028  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.803037  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.803041  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.806220  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:59.807058  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:59.807077  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.807083  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.807087  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.810042  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.810618  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"True"
	I0701 12:25:59.810639  653531 pod_ready.go:81] duration metric: took 48.008262746s for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
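Here the condition finally flips to True and the waiter reports the total: just over 48 seconds for coredns-7db6d8ff4d-nk4lf. A hedged sketch of the outer loop, assuming apimachinery's wait helpers; the 500ms cadence and the 6m0s ceiling come from the log, and checkPod is a stand-in for the probe sketched earlier.

package main

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// checkPod stands in for the GET-pod/GET-node probe; always false here so the
// snippet stays self-contained.
func checkPod(ctx context.Context) (bool, error) { return false, nil }

func main() {
	// Poll on a fixed cadence with a hard deadline matching the
	// "waiting up to 6m0s" budget the waiter logs for each pod.
	err := wait.PollUntilContextTimeout(context.Background(),
		500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) { return checkPod(ctx) })
	_ = err // a timeout error lands here if the pod never goes Ready
}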
	I0701 12:25:59.810648  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.810702  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p4rtz
	I0701 12:25:59.810709  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.810716  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.810720  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.813396  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.813957  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:59.813972  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.813979  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.813982  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.816606  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.816994  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace has status "Ready":"True"
	I0701 12:25:59.817012  653531 pod_ready.go:81] duration metric: took 6.357752ms for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.817021  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.817069  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960
	I0701 12:25:59.817076  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.817084  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.817090  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.819509  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.819970  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:59.819984  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.819991  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.819995  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.822382  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.822919  653531 pod_ready.go:92] pod "etcd-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:25:59.822941  653531 pod_ready.go:81] duration metric: took 5.912537ms for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.822951  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.823013  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m02
	I0701 12:25:59.823021  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.823028  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.823032  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.825241  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.825771  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:59.825785  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.825791  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.825795  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.828111  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.828706  653531 pod_ready.go:92] pod "etcd-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:25:59.828725  653531 pod_ready.go:81] duration metric: took 5.760203ms for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.828740  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.828804  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:25:59.828813  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.828820  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.828827  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.832068  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:59.832863  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:25:59.832878  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.832885  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.832892  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.835452  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.835992  653531 pod_ready.go:97] node "ha-735960-m03" hosting pod "etcd-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:25:59.836024  653531 pod_ready.go:81] duration metric: took 7.273472ms for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	E0701 12:25:59.836031  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m03" hosting pod "etcd-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
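The skip above keys off the node rather than the pod: ha-735960-m03 reports Ready "Unknown" (it has not rejoined yet), so the waiter records the condition and moves on instead of spending its budget on a stranded static pod. Under the same client-go assumption, the node-side check looks like this; the node name comes from the log and the kubeconfig path is again hypothetical.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether a node's NodeReady condition is True; anything
// else (False, or Unknown as for ha-735960-m03 above) means its pods get skipped.
func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(nodeIsReady(context.Background(), cs, "ha-735960-m03"))
}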
	I0701 12:25:59.836046  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:00.003492  653531 request.go:629] Waited for 167.376104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960
	I0701 12:26:00.003566  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960
	I0701 12:26:00.003574  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:00.003585  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:00.003603  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:00.011681  653531 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0701 12:26:00.203578  653531 request.go:629] Waited for 191.210292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:00.203641  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:00.203647  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:00.203654  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:00.203664  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:00.207391  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:00.207910  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:00.207934  653531 pod_ready.go:81] duration metric: took 371.877302ms for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
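The request.go:629 lines are client-go's flat-rate client-side limiter, not the server's priority-and-fairness machinery (the message says so explicitly): once the default burst is spent, each request blocks until the limiter releases it, which is why these waits hover near the 200ms spacing of a 5 QPS budget. A sketch of where those knobs live; the tuned values are purely illustrative, not what the test harness sets.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	// client-go ships with rest.DefaultQPS (5) and rest.DefaultBurst (10);
	// after the burst of quick requests, the limiter delays each call and
	// logs the "Waited for ... due to client-side throttling" lines above.
	fmt.Println("defaults:", rest.DefaultQPS, rest.DefaultBurst)
	cfg.QPS = 50    // illustrative tuning only
	cfg.Burst = 100 // illustrative tuning only
	_, _ = kubernetes.NewForConfig(cfg)
}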
	I0701 12:26:00.207946  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:00.403020  653531 request.go:629] Waited for 194.98389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m02
	I0701 12:26:00.403111  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m02
	I0701 12:26:00.403119  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:00.403141  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:00.403168  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:00.406515  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:00.603670  653531 request.go:629] Waited for 196.408497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:00.603756  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:00.603766  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:00.603776  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:00.603787  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:00.607641  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:00.608254  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:00.608279  653531 pod_ready.go:81] duration metric: took 400.3268ms for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:00.608290  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:00.803335  653531 request.go:629] Waited for 194.970976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:00.803416  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:00.803423  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:00.803432  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:00.803437  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:00.806887  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.003849  653531 request.go:629] Waited for 196.371058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:01.003924  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:01.003931  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:01.003942  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:01.003947  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:01.007167  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.007625  653531 pod_ready.go:97] node "ha-735960-m03" hosting pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:01.007649  653531 pod_ready.go:81] duration metric: took 399.353356ms for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	E0701 12:26:01.007659  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m03" hosting pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:01.007667  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:01.203752  653531 request.go:629] Waited for 195.992128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960
	I0701 12:26:01.203816  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960
	I0701 12:26:01.203821  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:01.203829  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:01.203835  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:01.207391  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.403364  653531 request.go:629] Waited for 195.371527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:01.403446  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:01.403452  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:01.403460  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:01.403464  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:01.406768  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.407262  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:01.407282  653531 pod_ready.go:81] duration metric: took 399.606397ms for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:01.407291  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:01.603806  653531 request.go:629] Waited for 196.426419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m02
	I0701 12:26:01.603868  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m02
	I0701 12:26:01.603877  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:01.603885  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:01.603889  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:01.607133  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.803115  653531 request.go:629] Waited for 195.29931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:01.803195  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:01.803202  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:01.803213  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:01.803220  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:01.806296  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.806997  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:01.807020  653531 pod_ready.go:81] duration metric: took 399.723075ms for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:01.807032  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:02.003077  653531 request.go:629] Waited for 195.935538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m03
	I0701 12:26:02.003184  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m03
	I0701 12:26:02.003199  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:02.003212  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:02.003220  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:02.008458  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:26:02.203469  653531 request.go:629] Waited for 194.368942ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:02.203529  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:02.203535  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:02.203542  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:02.203546  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:02.207148  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:02.207764  653531 pod_ready.go:97] node "ha-735960-m03" hosting pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:02.207791  653531 pod_ready.go:81] duration metric: took 400.749537ms for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	E0701 12:26:02.207804  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m03" hosting pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:02.207816  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:02.403791  653531 request.go:629] Waited for 195.887211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25ssf
	I0701 12:26:02.403858  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25ssf
	I0701 12:26:02.403864  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:02.403874  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:02.403879  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:02.407843  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:02.603935  653531 request.go:629] Waited for 195.282891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:26:02.604003  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:26:02.604008  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:02.604017  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:02.604024  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:02.607222  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:02.607681  653531 pod_ready.go:97] node "ha-735960-m04" hosting pod "kube-proxy-25ssf" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m04" has status "Ready":"Unknown"
	I0701 12:26:02.607701  653531 pod_ready.go:81] duration metric: took 399.872451ms for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	E0701 12:26:02.607710  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m04" hosting pod "kube-proxy-25ssf" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m04" has status "Ready":"Unknown"
	I0701 12:26:02.607715  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:02.803135  653531 request.go:629] Waited for 195.335441ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:26:02.803208  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:26:02.803214  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:02.803221  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:02.803229  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:02.806089  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:03.004065  653531 request.go:629] Waited for 197.373789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:03.004141  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:03.004150  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:03.004158  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:03.004174  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:03.007294  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:03.007921  653531 pod_ready.go:97] node "ha-735960-m03" hosting pod "kube-proxy-776rt" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:03.007945  653531 pod_ready.go:81] duration metric: took 400.223567ms for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
	E0701 12:26:03.007955  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m03" hosting pod "kube-proxy-776rt" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:03.007961  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:03.204042  653531 request.go:629] Waited for 195.997795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:26:03.204129  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:26:03.204135  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:03.204143  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:03.204151  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:03.207989  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:03.404038  653531 request.go:629] Waited for 195.374708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:03.404108  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:03.404113  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:03.404122  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:03.404127  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:03.407364  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:03.407859  653531 pod_ready.go:92] pod "kube-proxy-b6knb" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:03.407879  653531 pod_ready.go:81] duration metric: took 399.911763ms for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:03.407889  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:03.603040  653531 request.go:629] Waited for 195.068023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:26:03.603123  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:26:03.603128  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:03.603137  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:03.603141  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:03.606547  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:03.803798  653531 request.go:629] Waited for 196.387613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:03.803870  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:03.803875  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:03.803883  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:03.803888  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:03.807381  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:03.807877  653531 pod_ready.go:92] pod "kube-proxy-lphzn" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:03.807898  653531 pod_ready.go:81] duration metric: took 400.000751ms for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:03.807907  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:04.004031  653531 request.go:629] Waited for 196.031388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:26:04.004089  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:26:04.004095  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:04.004107  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:04.004115  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:04.007598  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:04.204058  653531 request.go:629] Waited for 195.850938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:04.204148  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:04.204158  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:04.204172  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:04.204181  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:04.207457  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:04.208086  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:04.208102  653531 pod_ready.go:81] duration metric: took 400.189366ms for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:04.208112  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:04.403245  653531 request.go:629] Waited for 195.048743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:26:04.403318  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:26:04.403323  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:04.403331  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:04.403335  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:04.406662  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:04.603781  653531 request.go:629] Waited for 196.396031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:04.603851  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:04.603858  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:04.603868  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:04.603872  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:04.607382  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:04.607837  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:04.607857  653531 pod_ready.go:81] duration metric: took 399.737176ms for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:04.607869  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:04.803931  653531 request.go:629] Waited for 195.967281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:26:04.804004  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:26:04.804010  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:04.804018  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:04.804025  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:04.807572  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:05.003764  653531 request.go:629] Waited for 195.365798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:05.003830  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:05.003836  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:05.003844  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:05.003852  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:05.006888  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:05.007360  653531 pod_ready.go:97] node "ha-735960-m03" hosting pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:05.007379  653531 pod_ready.go:81] duration metric: took 399.502183ms for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	E0701 12:26:05.007388  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m03" hosting pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:05.007396  653531 pod_ready.go:38] duration metric: took 53.305072048s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
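Each pod_ready cycle above pairs a pod GET with a GET of the hosting node: a pod only counts as "Ready" when its PodReady condition is True and its node is Ready, otherwise it is skipped with the WaitExtra error seen for m03 and m04. A rough client-go sketch of that per-pod wait, as an illustration rather than minikube's actual pod_ready.go:

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// podIsReady reports whether the pod's PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls until the pod reports Ready or the timeout elapses.
// Each cycle in the log above takes ~400ms: two GETs at ~200ms apiece
// under the client-side throttling described earlier.
func waitPodReady(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(400*time.Millisecond, timeout, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat errors as transient and keep polling
		}
		return podIsReady(pod), nil
	})
}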
	I0701 12:26:05.007419  653531 api_server.go:52] waiting for apiserver process to appear ...
	I0701 12:26:05.007525  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 12:26:05.023687  653531 logs.go:276] 2 containers: [f615f587cb12 c36c1d459356]
	I0701 12:26:05.023779  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 12:26:05.041137  653531 logs.go:276] 2 containers: [68c63c4abd01 dff0f4abea41]
	I0701 12:26:05.041235  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 12:26:05.059910  653531 logs.go:276] 0 containers: []
	W0701 12:26:05.059939  653531 logs.go:278] No container was found matching "coredns"
	I0701 12:26:05.060005  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 12:26:05.076858  653531 logs.go:276] 2 containers: [279483668a9c 58811626a0de]
	I0701 12:26:05.076953  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 12:26:05.091973  653531 logs.go:276] 2 containers: [156169e4ac3c 2885f7cf6f93]
	I0701 12:26:05.092072  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 12:26:05.109350  653531 logs.go:276] 2 containers: [a72e102b5bf7 a1160a455902]
	I0701 12:26:05.109445  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 12:26:05.126947  653531 logs.go:276] 2 containers: [c8184f4bc096 8c3a5ac0cf85]
	I0701 12:26:05.127013  653531 logs.go:123] Gathering logs for container status ...
	I0701 12:26:05.127032  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 12:26:05.172758  653531 logs.go:123] Gathering logs for describe nodes ...
	I0701 12:26:05.172800  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 12:26:05.530082  653531 logs.go:123] Gathering logs for kube-apiserver [f615f587cb12] ...
	I0701 12:26:05.530114  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f615f587cb12"
	I0701 12:26:05.563833  653531 logs.go:123] Gathering logs for kube-apiserver [c36c1d459356] ...
	I0701 12:26:05.563866  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36c1d459356"
	I0701 12:26:05.633259  653531 logs.go:123] Gathering logs for etcd [dff0f4abea41] ...
	I0701 12:26:05.633305  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dff0f4abea41"
	I0701 12:26:05.672146  653531 logs.go:123] Gathering logs for kube-scheduler [58811626a0de] ...
	I0701 12:26:05.672187  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58811626a0de"
	I0701 12:26:05.693508  653531 logs.go:123] Gathering logs for kube-proxy [2885f7cf6f93] ...
	I0701 12:26:05.693553  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2885f7cf6f93"
	I0701 12:26:05.717857  653531 logs.go:123] Gathering logs for Docker ...
	I0701 12:26:05.717889  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 12:26:05.766696  653531 logs.go:123] Gathering logs for dmesg ...
	I0701 12:26:05.766736  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 12:26:05.781553  653531 logs.go:123] Gathering logs for kube-proxy [156169e4ac3c] ...
	I0701 12:26:05.781587  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 156169e4ac3c"
	I0701 12:26:05.807724  653531 logs.go:123] Gathering logs for kindnet [8c3a5ac0cf85] ...
	I0701 12:26:05.807758  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a5ac0cf85"
	I0701 12:26:05.830042  653531 logs.go:123] Gathering logs for etcd [68c63c4abd01] ...
	I0701 12:26:05.830072  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68c63c4abd01"
	I0701 12:26:05.862525  653531 logs.go:123] Gathering logs for kube-controller-manager [a72e102b5bf7] ...
	I0701 12:26:05.862568  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a72e102b5bf7"
	I0701 12:26:05.901329  653531 logs.go:123] Gathering logs for kube-controller-manager [a1160a455902] ...
	I0701 12:26:05.901370  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1160a455902"
	I0701 12:26:05.942097  653531 logs.go:123] Gathering logs for kindnet [c8184f4bc096] ...
	I0701 12:26:05.942139  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8184f4bc096"
	I0701 12:26:05.964792  653531 logs.go:123] Gathering logs for kubelet ...
	I0701 12:26:05.964829  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 12:26:06.027347  653531 logs.go:123] Gathering logs for kube-scheduler [279483668a9c] ...
	I0701 12:26:06.027394  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483668a9c"
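Each "Gathering logs" pair above is a docker ps lookup followed by docker logs --tail 400, executed inside the VM through ssh_runner. A local stand-in for that loop, using os/exec instead of SSH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors `docker ps -a --filter=name=<name> --format={{.ID}}`.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name="+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("k8s_kube-apiserver")
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	for _, id := range ids {
		// Mirrors `docker logs --tail 400 <id>` from the log above.
		out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Println("docker logs failed for", id, ":", err)
			continue
		}
		fmt.Printf("=== %s ===\n%s\n", id, out)
	}
}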
	I0701 12:26:08.550396  653531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 12:26:08.565837  653531 api_server.go:72] duration metric: took 1m18.553699317s to wait for apiserver process to appear ...
	I0701 12:26:08.565866  653531 api_server.go:88] waiting for apiserver healthz status ...
	I0701 12:26:08.565941  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 12:26:08.584274  653531 logs.go:276] 2 containers: [f615f587cb12 c36c1d459356]
	I0701 12:26:08.584349  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 12:26:08.601551  653531 logs.go:276] 2 containers: [68c63c4abd01 dff0f4abea41]
	I0701 12:26:08.601633  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 12:26:08.619657  653531 logs.go:276] 0 containers: []
	W0701 12:26:08.619687  653531 logs.go:278] No container was found matching "coredns"
	I0701 12:26:08.619744  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 12:26:08.637393  653531 logs.go:276] 2 containers: [279483668a9c 58811626a0de]
	I0701 12:26:08.637473  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 12:26:08.662222  653531 logs.go:276] 2 containers: [156169e4ac3c 2885f7cf6f93]
	I0701 12:26:08.662307  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 12:26:08.678542  653531 logs.go:276] 2 containers: [a72e102b5bf7 a1160a455902]
	I0701 12:26:08.678649  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 12:26:08.698914  653531 logs.go:276] 2 containers: [c8184f4bc096 8c3a5ac0cf85]
	I0701 12:26:08.698956  653531 logs.go:123] Gathering logs for kube-scheduler [58811626a0de] ...
	I0701 12:26:08.698968  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58811626a0de"
	I0701 12:26:08.722744  653531 logs.go:123] Gathering logs for kube-controller-manager [a72e102b5bf7] ...
	I0701 12:26:08.722780  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a72e102b5bf7"
	I0701 12:26:08.767782  653531 logs.go:123] Gathering logs for kindnet [8c3a5ac0cf85] ...
	I0701 12:26:08.767825  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a5ac0cf85"
	I0701 12:26:08.792700  653531 logs.go:123] Gathering logs for Docker ...
	I0701 12:26:08.792731  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 12:26:08.841902  653531 logs.go:123] Gathering logs for container status ...
	I0701 12:26:08.841943  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 12:26:08.885531  653531 logs.go:123] Gathering logs for kubelet ...
	I0701 12:26:08.885563  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 12:26:08.940130  653531 logs.go:123] Gathering logs for etcd [68c63c4abd01] ...
	I0701 12:26:08.940179  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68c63c4abd01"
	I0701 12:26:08.973841  653531 logs.go:123] Gathering logs for etcd [dff0f4abea41] ...
	I0701 12:26:08.973883  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dff0f4abea41"
	I0701 12:26:09.008785  653531 logs.go:123] Gathering logs for kube-apiserver [f615f587cb12] ...
	I0701 12:26:09.008824  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f615f587cb12"
	I0701 12:26:09.040512  653531 logs.go:123] Gathering logs for kube-apiserver [c36c1d459356] ...
	I0701 12:26:09.040568  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36c1d459356"
	I0701 12:26:09.135818  653531 logs.go:123] Gathering logs for kube-scheduler [279483668a9c] ...
	I0701 12:26:09.135876  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483668a9c"
	I0701 12:26:09.158758  653531 logs.go:123] Gathering logs for describe nodes ...
	I0701 12:26:09.158802  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 12:26:09.415637  653531 logs.go:123] Gathering logs for kube-proxy [2885f7cf6f93] ...
	I0701 12:26:09.415685  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2885f7cf6f93"
	I0701 12:26:09.438064  653531 logs.go:123] Gathering logs for kindnet [c8184f4bc096] ...
	I0701 12:26:09.438104  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8184f4bc096"
	I0701 12:26:09.463612  653531 logs.go:123] Gathering logs for dmesg ...
	I0701 12:26:09.463666  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 12:26:09.477906  653531 logs.go:123] Gathering logs for kube-proxy [156169e4ac3c] ...
	I0701 12:26:09.477936  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 156169e4ac3c"
	I0701 12:26:09.501662  653531 logs.go:123] Gathering logs for kube-controller-manager [a1160a455902] ...
	I0701 12:26:09.501704  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1160a455902"
	I0701 12:26:12.049246  653531 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0701 12:26:12.055739  653531 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0701 12:26:12.055824  653531 round_trippers.go:463] GET https://192.168.39.16:8443/version
	I0701 12:26:12.055829  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:12.055837  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:12.055841  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:12.056892  653531 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0701 12:26:12.057034  653531 api_server.go:141] control plane version: v1.30.2
	I0701 12:26:12.057055  653531 api_server.go:131] duration metric: took 3.491183076s to wait for apiserver health ...
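The healthz wait above is a plain HTTPS GET of /healthz that succeeds once the body is "ok" with status 200, followed by a /version call to read the control plane version. A sketch of the probe; TLS verification is disabled here only to keep the example short, where the real client trusts the cluster CA from the kubeconfig:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy GETs <base>/healthz and treats "ok" + HTTP 200 as healthy.
func apiserverHealthy(base string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.16:8443")
	fmt.Println(ok, err)
}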
	I0701 12:26:12.057064  653531 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 12:26:12.057160  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 12:26:12.074309  653531 logs.go:276] 2 containers: [f615f587cb12 c36c1d459356]
	I0701 12:26:12.074405  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 12:26:12.100040  653531 logs.go:276] 2 containers: [68c63c4abd01 dff0f4abea41]
	I0701 12:26:12.100116  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 12:26:12.119321  653531 logs.go:276] 0 containers: []
	W0701 12:26:12.119352  653531 logs.go:278] No container was found matching "coredns"
	I0701 12:26:12.119406  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 12:26:12.137547  653531 logs.go:276] 2 containers: [279483668a9c 58811626a0de]
	I0701 12:26:12.137660  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 12:26:12.157321  653531 logs.go:276] 2 containers: [156169e4ac3c 2885f7cf6f93]
	I0701 12:26:12.157417  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 12:26:12.182117  653531 logs.go:276] 2 containers: [a72e102b5bf7 a1160a455902]
	I0701 12:26:12.182204  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 12:26:12.204201  653531 logs.go:276] 2 containers: [c8184f4bc096 8c3a5ac0cf85]
	I0701 12:26:12.204247  653531 logs.go:123] Gathering logs for kube-proxy [2885f7cf6f93] ...
	I0701 12:26:12.204260  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2885f7cf6f93"
	I0701 12:26:12.228173  653531 logs.go:123] Gathering logs for kube-controller-manager [a72e102b5bf7] ...
	I0701 12:26:12.228206  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a72e102b5bf7"
	I0701 12:26:12.267264  653531 logs.go:123] Gathering logs for kindnet [c8184f4bc096] ...
	I0701 12:26:12.267309  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8184f4bc096"
	I0701 12:26:12.294504  653531 logs.go:123] Gathering logs for Docker ...
	I0701 12:26:12.294535  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 12:26:12.344610  653531 logs.go:123] Gathering logs for describe nodes ...
	I0701 12:26:12.344649  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 12:26:12.593887  653531 logs.go:123] Gathering logs for kube-apiserver [c36c1d459356] ...
	I0701 12:26:12.593927  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36c1d459356"
	I0701 12:26:12.665033  653531 logs.go:123] Gathering logs for kube-proxy [156169e4ac3c] ...
	I0701 12:26:12.665082  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 156169e4ac3c"
	I0701 12:26:12.687103  653531 logs.go:123] Gathering logs for container status ...
	I0701 12:26:12.687142  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 12:26:12.735851  653531 logs.go:123] Gathering logs for kubelet ...
	I0701 12:26:12.735886  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 12:26:12.793127  653531 logs.go:123] Gathering logs for kube-apiserver [f615f587cb12] ...
	I0701 12:26:12.793168  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f615f587cb12"
	I0701 12:26:12.823004  653531 logs.go:123] Gathering logs for kindnet [8c3a5ac0cf85] ...
	I0701 12:26:12.823037  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a5ac0cf85"
	I0701 12:26:12.862610  653531 logs.go:123] Gathering logs for kube-scheduler [279483668a9c] ...
	I0701 12:26:12.862650  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483668a9c"
	I0701 12:26:12.883651  653531 logs.go:123] Gathering logs for kube-scheduler [58811626a0de] ...
	I0701 12:26:12.883685  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58811626a0de"
	I0701 12:26:12.905351  653531 logs.go:123] Gathering logs for kube-controller-manager [a1160a455902] ...
	I0701 12:26:12.905388  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1160a455902"
	I0701 12:26:12.938388  653531 logs.go:123] Gathering logs for dmesg ...
	I0701 12:26:12.938427  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 12:26:12.955609  653531 logs.go:123] Gathering logs for etcd [68c63c4abd01] ...
	I0701 12:26:12.955647  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68c63c4abd01"
	I0701 12:26:12.987593  653531 logs.go:123] Gathering logs for etcd [dff0f4abea41] ...
	I0701 12:26:12.987626  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dff0f4abea41"
	I0701 12:26:15.520590  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:26:15.520616  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:15.520625  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:15.520628  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:15.528299  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:26:15.535569  653531 system_pods.go:59] 26 kube-system pods found
	I0701 12:26:15.535603  653531 system_pods.go:61] "coredns-7db6d8ff4d-nk4lf" [c03dd635-a82d-4f18-bd72-ec575f91867e] Running
	I0701 12:26:15.535608  653531 system_pods.go:61] "coredns-7db6d8ff4d-p4rtz" [267efba7-bf34-48d5-ab15-5bda45ff2f4f] Running
	I0701 12:26:15.535613  653531 system_pods.go:61] "etcd-ha-735960" [4b98745c-292f-42b5-977c-69c50fd241f1] Running
	I0701 12:26:15.535617  653531 system_pods.go:61] "etcd-ha-735960-m02" [fed8cdfa-8428-47e0-84ef-05297ad232f8] Running
	I0701 12:26:15.535622  653531 system_pods.go:61] "etcd-ha-735960-m03" [50b07bc3-ff6b-487d-8654-901d96892868] Running
	I0701 12:26:15.535625  653531 system_pods.go:61] "kindnet-2424m" [aa18d5dd-f6eb-4f04-a61e-b0b257e214af] Running
	I0701 12:26:15.535628  653531 system_pods.go:61] "kindnet-6gx8s" [7f46a773-a075-476c-9e54-89f125b4b57a] Running
	I0701 12:26:15.535631  653531 system_pods.go:61] "kindnet-7f6hm" [a8c302b4-1163-4d4f-bfe3-4fd3b5d23cf0] Running
	I0701 12:26:15.535633  653531 system_pods.go:61] "kindnet-bztzv" [7afa0e45-3d10-40bc-b422-7005a3ca9d3a] Running
	I0701 12:26:15.535636  653531 system_pods.go:61] "kube-apiserver-ha-735960" [ad041aaa-465a-4d8a-a8dc-b7665e1d587d] Running
	I0701 12:26:15.535640  653531 system_pods.go:61] "kube-apiserver-ha-735960-m02" [ba28f48e-1c18-47e3-ab11-a9b5588c5c32] Running
	I0701 12:26:15.535642  653531 system_pods.go:61] "kube-apiserver-ha-735960-m03" [baafa3bf-78ee-4269-9591-b0440927e055] Running
	I0701 12:26:15.535646  653531 system_pods.go:61] "kube-controller-manager-ha-735960" [3f0f0cf5-329d-47bc-b922-7583902e2607] Running
	I0701 12:26:15.535649  653531 system_pods.go:61] "kube-controller-manager-ha-735960-m02" [258fde18-ac5c-4446-842b-9465529b154c] Running
	I0701 12:26:15.535652  653531 system_pods.go:61] "kube-controller-manager-ha-735960-m03" [79acc56b-a9e0-4d4b-bc64-1a3a36ddf051] Running
	I0701 12:26:15.535655  653531 system_pods.go:61] "kube-proxy-25ssf" [11f0dc23-ab9d-4d39-988d-4c44dfde86cd] Running
	I0701 12:26:15.535658  653531 system_pods.go:61] "kube-proxy-776rt" [5666dac7-924e-4429-bd1d-a1a5647cc611] Running
	I0701 12:26:15.535661  653531 system_pods.go:61] "kube-proxy-b6knb" [eb36e930-5799-4ff7-821a-ccb22303cd1b] Running
	I0701 12:26:15.535664  653531 system_pods.go:61] "kube-proxy-lphzn" [0761a7a6-740e-4cde-9ab5-e02e8d417907] Running
	I0701 12:26:15.535667  653531 system_pods.go:61] "kube-scheduler-ha-735960" [c624cf42-a7d6-4aaf-859d-1aeaf29f9acb] Running
	I0701 12:26:15.535670  653531 system_pods.go:61] "kube-scheduler-ha-735960-m02" [7de78af7-2d79-46dc-bd34-f221d79fde06] Running
	I0701 12:26:15.535673  653531 system_pods.go:61] "kube-scheduler-ha-735960-m03" [9f9a2030-9332-44af-b8dc-3b4609e53f91] Running
	I0701 12:26:15.535676  653531 system_pods.go:61] "kube-vip-ha-735960" [4299679a-c145-4f4f-8ec6-3cd468b98ef1] Running
	I0701 12:26:15.535679  653531 system_pods.go:61] "kube-vip-ha-735960-m02" [1c9b13e1-515c-43c0-8d99-5ad1c1807727] Running
	I0701 12:26:15.535684  653531 system_pods.go:61] "kube-vip-ha-735960-m03" [7069ea7c-5461-4fe6-a969-97fe33396ebb] Running
	I0701 12:26:15.535688  653531 system_pods.go:61] "storage-provisioner" [f5c4f7f9-d648-4019-a5ea-6ce59f6c5663] Running
	I0701 12:26:15.535693  653531 system_pods.go:74] duration metric: took 3.47862483s to wait for pod list to return data ...
	I0701 12:26:15.535701  653531 default_sa.go:34] waiting for default service account to be created ...
	I0701 12:26:15.535798  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/default/serviceaccounts
	I0701 12:26:15.535809  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:15.535816  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:15.535820  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:15.539198  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:15.539410  653531 default_sa.go:45] found service account: "default"
	I0701 12:26:15.539425  653531 default_sa.go:55] duration metric: took 3.71568ms for default service account to be created ...
	I0701 12:26:15.539433  653531 system_pods.go:116] waiting for k8s-apps to be running ...
	I0701 12:26:15.539483  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:26:15.539490  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:15.539497  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:15.539503  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:15.547242  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:26:15.553992  653531 system_pods.go:86] 26 kube-system pods found
	I0701 12:26:15.554026  653531 system_pods.go:89] "coredns-7db6d8ff4d-nk4lf" [c03dd635-a82d-4f18-bd72-ec575f91867e] Running
	I0701 12:26:15.554034  653531 system_pods.go:89] "coredns-7db6d8ff4d-p4rtz" [267efba7-bf34-48d5-ab15-5bda45ff2f4f] Running
	I0701 12:26:15.554040  653531 system_pods.go:89] "etcd-ha-735960" [4b98745c-292f-42b5-977c-69c50fd241f1] Running
	I0701 12:26:15.554046  653531 system_pods.go:89] "etcd-ha-735960-m02" [fed8cdfa-8428-47e0-84ef-05297ad232f8] Running
	I0701 12:26:15.554050  653531 system_pods.go:89] "etcd-ha-735960-m03" [50b07bc3-ff6b-487d-8654-901d96892868] Running
	I0701 12:26:15.554056  653531 system_pods.go:89] "kindnet-2424m" [aa18d5dd-f6eb-4f04-a61e-b0b257e214af] Running
	I0701 12:26:15.554062  653531 system_pods.go:89] "kindnet-6gx8s" [7f46a773-a075-476c-9e54-89f125b4b57a] Running
	I0701 12:26:15.554069  653531 system_pods.go:89] "kindnet-7f6hm" [a8c302b4-1163-4d4f-bfe3-4fd3b5d23cf0] Running
	I0701 12:26:15.554075  653531 system_pods.go:89] "kindnet-bztzv" [7afa0e45-3d10-40bc-b422-7005a3ca9d3a] Running
	I0701 12:26:15.554081  653531 system_pods.go:89] "kube-apiserver-ha-735960" [ad041aaa-465a-4d8a-a8dc-b7665e1d587d] Running
	I0701 12:26:15.554088  653531 system_pods.go:89] "kube-apiserver-ha-735960-m02" [ba28f48e-1c18-47e3-ab11-a9b5588c5c32] Running
	I0701 12:26:15.554099  653531 system_pods.go:89] "kube-apiserver-ha-735960-m03" [baafa3bf-78ee-4269-9591-b0440927e055] Running
	I0701 12:26:15.554107  653531 system_pods.go:89] "kube-controller-manager-ha-735960" [3f0f0cf5-329d-47bc-b922-7583902e2607] Running
	I0701 12:26:15.554115  653531 system_pods.go:89] "kube-controller-manager-ha-735960-m02" [258fde18-ac5c-4446-842b-9465529b154c] Running
	I0701 12:26:15.554123  653531 system_pods.go:89] "kube-controller-manager-ha-735960-m03" [79acc56b-a9e0-4d4b-bc64-1a3a36ddf051] Running
	I0701 12:26:15.554131  653531 system_pods.go:89] "kube-proxy-25ssf" [11f0dc23-ab9d-4d39-988d-4c44dfde86cd] Running
	I0701 12:26:15.554140  653531 system_pods.go:89] "kube-proxy-776rt" [5666dac7-924e-4429-bd1d-a1a5647cc611] Running
	I0701 12:26:15.554148  653531 system_pods.go:89] "kube-proxy-b6knb" [eb36e930-5799-4ff7-821a-ccb22303cd1b] Running
	I0701 12:26:15.554163  653531 system_pods.go:89] "kube-proxy-lphzn" [0761a7a6-740e-4cde-9ab5-e02e8d417907] Running
	I0701 12:26:15.554170  653531 system_pods.go:89] "kube-scheduler-ha-735960" [c624cf42-a7d6-4aaf-859d-1aeaf29f9acb] Running
	I0701 12:26:15.554176  653531 system_pods.go:89] "kube-scheduler-ha-735960-m02" [7de78af7-2d79-46dc-bd34-f221d79fde06] Running
	I0701 12:26:15.554183  653531 system_pods.go:89] "kube-scheduler-ha-735960-m03" [9f9a2030-9332-44af-b8dc-3b4609e53f91] Running
	I0701 12:26:15.554192  653531 system_pods.go:89] "kube-vip-ha-735960" [4299679a-c145-4f4f-8ec6-3cd468b98ef1] Running
	I0701 12:26:15.554199  653531 system_pods.go:89] "kube-vip-ha-735960-m02" [1c9b13e1-515c-43c0-8d99-5ad1c1807727] Running
	I0701 12:26:15.554207  653531 system_pods.go:89] "kube-vip-ha-735960-m03" [7069ea7c-5461-4fe6-a969-97fe33396ebb] Running
	I0701 12:26:15.554216  653531 system_pods.go:89] "storage-provisioner" [f5c4f7f9-d648-4019-a5ea-6ce59f6c5663] Running
	I0701 12:26:15.554229  653531 system_pods.go:126] duration metric: took 14.787055ms to wait for k8s-apps to be running ...
	I0701 12:26:15.554241  653531 system_svc.go:44] waiting for kubelet service to be running ....
	I0701 12:26:15.554296  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:26:15.567890  653531 system_svc.go:56] duration metric: took 13.638054ms WaitForService to wait for kubelet
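The kubelet check above boils down to running systemctl is-active --quiet over SSH and reading the exit status, where 0 means the unit is active. A local stand-in with os/exec, mirroring the exact argument list from the log:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive runs the same command as the ssh_runner line above;
// --quiet suppresses output, so only the exit code matters.
func kubeletActive() bool {
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	return cmd.Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}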
	I0701 12:26:15.567925  653531 kubeadm.go:576] duration metric: took 1m25.555790211s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:26:15.567951  653531 node_conditions.go:102] verifying NodePressure condition ...
	I0701 12:26:15.568047  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes
	I0701 12:26:15.568057  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:15.568067  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:15.568074  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:15.575311  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:26:15.577277  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:26:15.577310  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:26:15.577328  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:26:15.577334  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:26:15.577339  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:26:15.577343  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:26:15.577348  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:26:15.577352  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:26:15.577358  653531 node_conditions.go:105] duration metric: took 9.401356ms to run NodePressure ...
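The NodePressure step above reads each node's status once and reports the two capacities logged here, ephemeral storage and CPU, for all four nodes. A client-go sketch of that read (the package name is arbitrary):

package nodecheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printCapacities lists every node and prints the two capacities the log
// reports: ephemeral storage and CPU.
func printCapacities(client kubernetes.Interface) error {
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}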
	I0701 12:26:15.577372  653531 start.go:240] waiting for startup goroutines ...
	I0701 12:26:15.577418  653531 start.go:254] writing updated cluster config ...
	I0701 12:26:15.579876  653531 out.go:177] 
	I0701 12:26:15.581466  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:26:15.581562  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:26:15.583519  653531 out.go:177] * Starting "ha-735960-m03" control-plane node in "ha-735960" cluster
	I0701 12:26:15.584707  653531 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:26:15.584732  653531 cache.go:56] Caching tarball of preloaded images
	I0701 12:26:15.584831  653531 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:26:15.584841  653531 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:26:15.584932  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:26:15.585716  653531 start.go:360] acquireMachinesLock for ha-735960-m03: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:26:15.585768  653531 start.go:364] duration metric: took 28.47µs to acquireMachinesLock for "ha-735960-m03"
	I0701 12:26:15.585785  653531 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:26:15.585798  653531 fix.go:54] fixHost starting: m03
	I0701 12:26:15.586107  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:26:15.586143  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:26:15.603500  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43455
	I0701 12:26:15.603962  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:26:15.604555  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:26:15.604579  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:26:15.604930  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:26:15.605195  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:15.605384  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetState
	I0701 12:26:15.607018  653531 fix.go:112] recreateIfNeeded on ha-735960-m03: state=Stopped err=<nil>
	I0701 12:26:15.607042  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	W0701 12:26:15.607213  653531 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:26:15.609349  653531 out.go:177] * Restarting existing kvm2 VM for "ha-735960-m03" ...
	I0701 12:26:15.610714  653531 main.go:141] libmachine: (ha-735960-m03) Calling .Start
	I0701 12:26:15.610921  653531 main.go:141] libmachine: (ha-735960-m03) Ensuring networks are active...
	I0701 12:26:15.611706  653531 main.go:141] libmachine: (ha-735960-m03) Ensuring network default is active
	I0701 12:26:15.612087  653531 main.go:141] libmachine: (ha-735960-m03) Ensuring network mk-ha-735960 is active
	I0701 12:26:15.612457  653531 main.go:141] libmachine: (ha-735960-m03) Getting domain xml...
	I0701 12:26:15.613082  653531 main.go:141] libmachine: (ha-735960-m03) Creating domain...
	I0701 12:26:16.855928  653531 main.go:141] libmachine: (ha-735960-m03) Waiting to get IP...
	I0701 12:26:16.856767  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:16.857131  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:16.857182  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:16.857114  654164 retry.go:31] will retry after 232.687433ms: waiting for machine to come up
	I0701 12:26:17.091660  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:17.092187  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:17.092229  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:17.092112  654164 retry.go:31] will retry after 320.051772ms: waiting for machine to come up
	I0701 12:26:17.413613  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:17.414125  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:17.414157  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:17.414063  654164 retry.go:31] will retry after 415.446228ms: waiting for machine to come up
	I0701 12:26:17.830725  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:17.831413  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:17.831445  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:17.831349  654164 retry.go:31] will retry after 522.707968ms: waiting for machine to come up
	I0701 12:26:18.356092  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:18.356521  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:18.356543  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:18.356485  654164 retry.go:31] will retry after 572.783424ms: waiting for machine to come up
	I0701 12:26:18.931377  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:18.931831  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:18.931856  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:18.931778  654164 retry.go:31] will retry after 662.269299ms: waiting for machine to come up
	I0701 12:26:19.595406  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:19.595831  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:19.595862  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:19.595779  654164 retry.go:31] will retry after 965.977644ms: waiting for machine to come up
	I0701 12:26:20.562930  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:20.563372  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:20.563432  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:20.563328  654164 retry.go:31] will retry after 1.166893605s: waiting for machine to come up
	I0701 12:26:21.731632  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:21.732082  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:21.732114  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:21.732040  654164 retry.go:31] will retry after 1.800222328s: waiting for machine to come up
	I0701 12:26:23.534948  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:23.535342  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:23.535372  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:23.535277  654164 retry.go:31] will retry after 1.820829305s: waiting for machine to come up
	I0701 12:26:25.357271  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:25.357677  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:25.357701  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:25.357630  654164 retry.go:31] will retry after 1.816274117s: waiting for machine to come up
	I0701 12:26:27.176155  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:27.176621  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:27.176653  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:27.176598  654164 retry.go:31] will retry after 2.782602178s: waiting for machine to come up
	I0701 12:26:29.960991  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:29.961388  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:29.961421  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:29.961334  654164 retry.go:31] will retry after 3.816886888s: waiting for machine to come up
	I0701 12:26:33.779810  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.780404  653531 main.go:141] libmachine: (ha-735960-m03) Found IP for machine: 192.168.39.97
	I0701 12:26:33.780436  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has current primary IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.780448  653531 main.go:141] libmachine: (ha-735960-m03) Reserving static IP address...
	I0701 12:26:33.780953  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "ha-735960-m03", mac: "52:54:00:93:88:f2", ip: "192.168.39.97"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:33.780975  653531 main.go:141] libmachine: (ha-735960-m03) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960-m03", mac: "52:54:00:93:88:f2", ip: "192.168.39.97"}
	I0701 12:26:33.780986  653531 main.go:141] libmachine: (ha-735960-m03) Reserved static IP address: 192.168.39.97
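The "will retry after ..." sequence above (232ms, 320ms, 415ms, then up to several seconds) is a jittered, roughly geometric backoff around the DHCP-lease lookup, via minikube's retry.go. A generic sketch of the pattern; the growth factor and jitter range are assumptions for illustration, not minikube's exact constants:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out,
// sleeping an increasing, jittered delay between tries, like the
// 232ms -> 320ms -> 415ms -> ... sequence in the log above.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Jitter in [0.8, 1.2) keeps concurrent waiters from synchronizing.
		sleep := time.Duration(float64(delay) * (0.8 + 0.4*rand.Float64()))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = time.Duration(float64(delay) * 1.4) // grow ~40% per attempt
	}
	return err
}

func main() {
	tries := 0
	err := retryWithBackoff(13, 250*time.Millisecond, func() error {
		tries++
		if tries < 5 { // stand-in for querying DHCP leases until one appears
			return errors.New("unable to find current IP address")
		}
		return nil
	})
	fmt.Println("done:", err)
}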
	I0701 12:26:33.780995  653531 main.go:141] libmachine: (ha-735960-m03) Waiting for SSH to be available...
	I0701 12:26:33.781005  653531 main.go:141] libmachine: (ha-735960-m03) DBG | Getting to WaitForSSH function...
	I0701 12:26:33.783239  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.783609  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:33.783636  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.783742  653531 main.go:141] libmachine: (ha-735960-m03) DBG | Using SSH client type: external
	I0701 12:26:33.783770  653531 main.go:141] libmachine: (ha-735960-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa (-rw-------)
	I0701 12:26:33.783810  653531 main.go:141] libmachine: (ha-735960-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:26:33.783825  653531 main.go:141] libmachine: (ha-735960-m03) DBG | About to run SSH command:
	I0701 12:26:33.783839  653531 main.go:141] libmachine: (ha-735960-m03) DBG | exit 0
	I0701 12:26:33.906528  653531 main.go:141] libmachine: (ha-735960-m03) DBG | SSH cmd err, output: <nil>: 
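WaitForSSH above shells out to the system ssh binary with the options shown in the DBG line and runs exit 0; a nil error means sshd in the guest is accepting logins. Reduced to a sketch, reusing the host, user, and key path from the log:

package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs `ssh ... docker@<host> "exit 0"` and reports success on a
// clean exit, which is exactly what the WaitForSSH loop above checks for.
func sshReady(host, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run() == nil
}

func main() {
	fmt.Println(sshReady("192.168.39.97",
		"/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa"))
}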
	I0701 12:26:33.906854  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetConfigRaw
	I0701 12:26:33.907659  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetIP
	I0701 12:26:33.910504  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.910919  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:33.910952  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.911199  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:26:33.911468  653531 machine.go:94] provisionDockerMachine start ...
	I0701 12:26:33.911493  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:33.911726  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:33.913742  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.914049  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:33.914079  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.914213  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:33.914440  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:33.914614  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:33.914781  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:33.914952  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:33.915169  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:33.915186  653531 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:26:34.022720  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:26:34.022751  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetMachineName
	I0701 12:26:34.023048  653531 buildroot.go:166] provisioning hostname "ha-735960-m03"
	I0701 12:26:34.023086  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetMachineName
	I0701 12:26:34.023302  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.026253  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.026699  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.026731  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.026891  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.027100  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.027330  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.027468  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.027637  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:34.027853  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:34.027872  653531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960-m03 && echo "ha-735960-m03" | sudo tee /etc/hostname
	I0701 12:26:34.143884  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960-m03
	
	I0701 12:26:34.143919  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.146876  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.147233  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.147259  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.147410  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.147595  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.147764  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.147906  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.148107  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:34.148271  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:34.148287  653531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:26:34.259290  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 12:26:34.259326  653531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:26:34.259348  653531 buildroot.go:174] setting up certificates
	I0701 12:26:34.259361  653531 provision.go:84] configureAuth start
	I0701 12:26:34.259373  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetMachineName
	I0701 12:26:34.259700  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetIP
	I0701 12:26:34.262660  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.263056  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.263088  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.263229  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.265709  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.266104  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.266129  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.266291  653531 provision.go:143] copyHostCerts
	I0701 12:26:34.266320  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:26:34.266385  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:26:34.266399  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:26:34.266510  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:26:34.266616  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:26:34.266642  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:26:34.266651  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:26:34.266687  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:26:34.266758  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:26:34.266785  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:26:34.266794  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:26:34.266828  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:26:34.266895  653531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960-m03 san=[127.0.0.1 192.168.39.97 ha-735960-m03 localhost minikube]
	I0701 12:26:34.565581  653531 provision.go:177] copyRemoteCerts
	I0701 12:26:34.565649  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:26:34.565676  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.568539  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.568839  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.568870  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.569025  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.569261  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.569428  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.569588  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa Username:docker}
	I0701 12:26:34.652136  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:26:34.652230  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:26:34.676227  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:26:34.676305  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0701 12:26:34.699234  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:26:34.699313  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 12:26:34.721885  653531 provision.go:87] duration metric: took 462.509686ms to configureAuth
	I0701 12:26:34.721915  653531 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:26:34.722137  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:26:34.722181  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:34.722494  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.725227  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.725601  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.725629  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.725789  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.725994  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.726175  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.726384  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.726572  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:34.726794  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:34.726809  653531 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:26:34.831674  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:26:34.831699  653531 buildroot.go:70] root file system type: tmpfs
	I0701 12:26:34.831846  653531 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:26:34.831923  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.835107  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.835603  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.835626  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.835928  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.836184  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.836401  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.836577  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.836754  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:34.836963  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:34.837056  653531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.16"
	Environment="NO_PROXY=192.168.39.16,192.168.39.86"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:26:34.951789  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.16
	Environment=NO_PROXY=192.168.39.16,192.168.39.86
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:26:34.951830  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.954854  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.955349  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.955376  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.955552  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.955761  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.955952  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.956104  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.956269  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:34.956436  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:34.956451  653531 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:26:36.820196  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 12:26:36.820235  653531 machine.go:97] duration metric: took 2.908749821s to provisionDockerMachine
	I0701 12:26:36.820254  653531 start.go:293] postStartSetup for "ha-735960-m03" (driver="kvm2")
	I0701 12:26:36.820269  653531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:26:36.820322  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:36.820717  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:26:36.820758  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:36.823679  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:36.824131  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:36.824158  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:36.824315  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:36.824557  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:36.824862  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:36.825025  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa Username:docker}
	I0701 12:26:36.909262  653531 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:26:36.913798  653531 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:26:36.913830  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:26:36.913904  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:26:36.913973  653531 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:26:36.913983  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:26:36.914063  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:26:36.924147  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:26:36.949103  653531 start.go:296] duration metric: took 128.830664ms for postStartSetup
	I0701 12:26:36.949169  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:36.949541  653531 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:26:36.949572  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:36.952321  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:36.952670  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:36.952703  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:36.952895  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:36.953116  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:36.953299  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:36.953494  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa Username:docker}
	I0701 12:26:37.037086  653531 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:26:37.037223  653531 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:26:37.097170  653531 fix.go:56] duration metric: took 21.511363009s for fixHost
	I0701 12:26:37.097229  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:37.100519  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.100936  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:37.100988  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.101235  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:37.101494  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:37.101681  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:37.101864  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:37.102058  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:37.102248  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:37.102261  653531 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0701 12:26:37.210872  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836797.190240924
	
	I0701 12:26:37.210897  653531 fix.go:216] guest clock: 1719836797.190240924
	I0701 12:26:37.210906  653531 fix.go:229] Guest: 2024-07-01 12:26:37.190240924 +0000 UTC Remote: 2024-07-01 12:26:37.09720405 +0000 UTC m=+154.567055715 (delta=93.036874ms)
	I0701 12:26:37.210928  653531 fix.go:200] guest clock delta is within tolerance: 93.036874ms
	I0701 12:26:37.210935  653531 start.go:83] releasing machines lock for "ha-735960-m03", held for 21.625157566s
	I0701 12:26:37.210966  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:37.211304  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetIP
	I0701 12:26:37.213807  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.214222  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:37.214255  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.216716  653531 out.go:177] * Found network options:
	I0701 12:26:37.218305  653531 out.go:177]   - NO_PROXY=192.168.39.16,192.168.39.86
	W0701 12:26:37.219816  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:26:37.219845  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:26:37.219865  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:37.220522  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:37.220737  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:37.220844  653531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:26:37.220887  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	W0701 12:26:37.220953  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:26:37.220981  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:26:37.221057  653531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0701 12:26:37.221077  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:37.223616  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.223976  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:37.224003  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.224022  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.224163  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:37.224349  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:37.224476  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:37.224495  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.224522  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:37.224684  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:37.224708  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa Username:docker}
	I0701 12:26:37.224822  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:37.224957  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:37.225089  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa Username:docker}
	W0701 12:26:37.324512  653531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:26:37.324590  653531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:26:37.342354  653531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:26:37.342401  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:26:37.342553  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:26:37.361964  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:26:37.372356  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:26:37.382741  653531 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:26:37.382800  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:26:37.393672  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:26:37.404182  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:26:37.413967  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:26:37.425102  653531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:26:37.436486  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:26:37.448119  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:26:37.459499  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:26:37.470904  653531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:26:37.480202  653531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:26:37.489935  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:37.612275  653531 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 12:26:37.635575  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:26:37.635692  653531 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:26:37.653571  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:26:37.670438  653531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:26:37.688000  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:26:37.705115  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:26:37.718914  653531 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:26:37.744858  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:26:37.759980  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:26:37.779721  653531 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:26:37.783771  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:26:37.794141  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:26:37.811510  653531 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:26:37.931976  653531 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:26:38.066164  653531 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:26:38.066230  653531 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 12:26:38.083572  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:38.206358  653531 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:26:40.648995  653531 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.442581628s)
	I0701 12:26:40.649094  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 12:26:40.663523  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:26:40.678231  653531 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:26:40.794839  653531 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:26:40.936707  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:41.068605  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:26:41.086480  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:26:41.102238  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:41.225877  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0701 12:26:41.309074  653531 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:26:41.309144  653531 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0701 12:26:41.314764  653531 start.go:562] Will wait 60s for crictl version
	I0701 12:26:41.314839  653531 ssh_runner.go:195] Run: which crictl
	I0701 12:26:41.318792  653531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:26:41.356836  653531 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.1
	RuntimeApiVersion:  v1
	I0701 12:26:41.356927  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:26:41.383790  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:26:41.409143  653531 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
	I0701 12:26:41.410603  653531 out.go:177]   - env NO_PROXY=192.168.39.16
	I0701 12:26:41.412215  653531 out.go:177]   - env NO_PROXY=192.168.39.16,192.168.39.86
	I0701 12:26:41.413404  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetIP
	I0701 12:26:41.416274  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:41.416763  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:41.416796  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:41.417070  653531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0701 12:26:41.421392  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:26:41.434549  653531 mustload.go:65] Loading cluster: ha-735960
	I0701 12:26:41.434797  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:26:41.435079  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:26:41.435129  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:26:41.451156  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45677
	I0701 12:26:41.451676  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:26:41.452212  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:26:41.452237  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:26:41.452614  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:26:41.452827  653531 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:26:41.454575  653531 host.go:66] Checking if "ha-735960" exists ...
	I0701 12:26:41.454891  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:26:41.454938  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:26:41.471129  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33243
	I0701 12:26:41.471681  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:26:41.472198  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:26:41.472222  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:26:41.472612  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:26:41.472844  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:26:41.473032  653531 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960 for IP: 192.168.39.97
	I0701 12:26:41.473049  653531 certs.go:194] generating shared ca certs ...
	I0701 12:26:41.473074  653531 certs.go:226] acquiring lock for ca certs: {Name:mk34e166bfd069e523b2325e14d1812c523bff53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:26:41.473230  653531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key
	I0701 12:26:41.473268  653531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key
	I0701 12:26:41.473278  653531 certs.go:256] generating profile certs ...
	I0701 12:26:41.473349  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key
	I0701 12:26:41.473405  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.f1482ab5
	I0701 12:26:41.473453  653531 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key
	I0701 12:26:41.473465  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 12:26:41.473478  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 12:26:41.473490  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 12:26:41.473503  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 12:26:41.473514  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0701 12:26:41.473528  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0701 12:26:41.473537  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0701 12:26:41.473548  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0701 12:26:41.473603  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem (1338 bytes)
	W0701 12:26:41.473630  653531 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854_empty.pem, impossibly tiny 0 bytes
	I0701 12:26:41.473639  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 12:26:41.473659  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:26:41.473680  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:26:41.473702  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem (1675 bytes)
	I0701 12:26:41.473736  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:26:41.473759  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem -> /usr/share/ca-certificates/637854.pem
	I0701 12:26:41.473772  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /usr/share/ca-certificates/6378542.pem
	I0701 12:26:41.473784  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:26:41.494518  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:26:41.498371  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:26:41.498974  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:26:41.499011  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:26:41.499158  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:26:41.499416  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:26:41.499610  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:26:41.499835  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:26:41.570757  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0701 12:26:41.575932  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0701 12:26:41.587511  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0701 12:26:41.591633  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0701 12:26:41.604961  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0701 12:26:41.609152  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0701 12:26:41.619653  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0701 12:26:41.623572  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0701 12:26:41.634171  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0701 12:26:41.638176  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0701 12:26:41.654120  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0701 12:26:41.659095  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0701 12:26:41.671865  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:26:41.701740  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:26:41.726445  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:26:41.751925  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:26:41.776782  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0701 12:26:41.801611  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 12:26:41.825786  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 12:26:41.849992  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 12:26:41.873760  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem --> /usr/share/ca-certificates/637854.pem (1338 bytes)
	I0701 12:26:41.898685  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /usr/share/ca-certificates/6378542.pem (1708 bytes)
	I0701 12:26:41.923397  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:26:41.948251  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0701 12:26:41.965919  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0701 12:26:41.982966  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0701 12:26:42.001626  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0701 12:26:42.019386  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0701 12:26:42.036382  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0701 12:26:42.053238  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0701 12:26:42.070881  653531 ssh_runner.go:195] Run: openssl version
	I0701 12:26:42.076651  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/637854.pem && ln -fs /usr/share/ca-certificates/637854.pem /etc/ssl/certs/637854.pem"
	I0701 12:26:42.087389  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/637854.pem
	I0701 12:26:42.093055  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 12:11 /usr/share/ca-certificates/637854.pem
	I0701 12:26:42.093154  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/637854.pem
	I0701 12:26:42.099823  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/637854.pem /etc/ssl/certs/51391683.0"
	I0701 12:26:42.111701  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6378542.pem && ln -fs /usr/share/ca-certificates/6378542.pem /etc/ssl/certs/6378542.pem"
	I0701 12:26:42.125593  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6378542.pem
	I0701 12:26:42.130163  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 12:11 /usr/share/ca-certificates/6378542.pem
	I0701 12:26:42.130246  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem
	I0701 12:26:42.136102  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6378542.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 12:26:42.147064  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:26:42.159086  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:26:42.163767  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:26:42.163864  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:26:42.170462  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
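Each `openssl x509 -hash -noout` call above prints the certificate's subject hash, and the `ln -fs ... /etc/ssl/certs/<hash>.0` links (51391683.0, 3ec20f2e.0, b5213941.0) are what make the CAs discoverable by OpenSSL's hashed-directory lookup. A small sketch of the same two steps, shelling out to openssl (the certificate path is hypothetical):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // hypothetical path
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}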
	I0701 12:26:42.181119  653531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 12:26:42.185711  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 12:26:42.191736  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 12:26:42.198232  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 12:26:42.204698  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 12:26:42.210909  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 12:26:42.216837  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
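The `-checkend 86400` probes ask whether each certificate is still valid 86400 seconds (24 h) from now; a non-zero exit would force regeneration before the node joins. The same check can be done natively with crypto/x509, as in this sketch (path hypothetical):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt") // hypothetical path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block in file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: still valid in 24 h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; regenerate")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}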
	I0701 12:26:42.222755  653531 kubeadm.go:928] updating node {m03 192.168.39.97 8443 v1.30.2 docker true true} ...
	I0701 12:26:42.222878  653531 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
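The `[Unit]`/`[Service]`/`[Install]` block is the kubelet systemd drop-in minikube renders per node; only --hostname-override and --node-ip differ between ha-735960, -m02, and -m03. A toy sketch of rendering such a flag set with text/template (the template fragment and field names are illustrative, not minikube's real template):

package main

import (
	"os"
	"text/template"
)

// Illustrative fragment only; the real drop-in carries more flags.
const unit = `[Service]
ExecStart=
ExecStart={{.Kubelet}} --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	if err := t.Execute(os.Stdout, map[string]string{
		"Kubelet": "/var/lib/minikube/binaries/v1.30.2/kubelet",
		"Node":    "ha-735960-m03",
		"IP":      "192.168.39.97",
	}); err != nil {
		panic(err)
	}
}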
	I0701 12:26:42.222906  653531 kube-vip.go:115] generating kube-vip config ...
	I0701 12:26:42.222955  653531 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0701 12:26:42.237298  653531 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0701 12:26:42.237376  653531 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
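The generated manifest is written a few lines below as a static pod (/etc/kubernetes/manifests/kube-vip.yaml), so the kubelet itself keeps kube-vip running; cp_enable and lb_enable turn on control-plane leader election and load balancing for the VIP 192.168.39.254:8443. One way to sanity-check such a manifest offline is to round-trip it through the Kubernetes types, as in this sketch (the local file name is hypothetical; valueFrom entries like vip_nodename print an empty value):

package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	raw, err := os.ReadFile("kube-vip.yaml") // hypothetical local copy of the manifest above
	if err != nil {
		panic(err)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(raw, &pod); err != nil {
		panic(err)
	}
	// Dump the env wiring; valueFrom entries print an empty value here.
	for _, env := range pod.Spec.Containers[0].Env {
		fmt.Printf("%s=%s\n", env.Name, env.Value)
	}
}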
	I0701 12:26:42.237455  653531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 12:26:42.247439  653531 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:26:42.247515  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0701 12:26:42.257290  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0701 12:26:42.274152  653531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:26:42.290241  653531 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0701 12:26:42.308095  653531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0701 12:26:42.312034  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:26:42.325214  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:42.447612  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
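The /etc/hosts update above is the classic idempotent upsert: strip any existing control-plane.minikube.internal line with grep -v, append the desired mapping, write to a temp file, then copy it over the original. A sketch of the same operation in Go (the real target is the root-owned /etc/hosts, hence the temp-file-plus-sudo-cp dance in the log):

package sketch

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites an /etc/hosts-style file so exactly one line maps host
// to ip, mirroring the grep -v / echo / cp pipeline in the log above.
func upsertHost(path, ip, host string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) { // drop any stale mapping
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

upsertHost("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal") reproduces the effect for a file the caller can write directly.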
	I0701 12:26:42.465983  653531 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:26:42.466298  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:26:42.468248  653531 out.go:177] * Verifying Kubernetes components...
	I0701 12:26:42.469706  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:42.625060  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:26:42.647149  653531 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:26:42.647532  653531 kapi.go:59] client config for ha-735960: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key", CAFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfbb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0701 12:26:42.647632  653531 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.16:8443
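The kubeconfig points at the HA VIP (https://192.168.39.254:8443), but since only the first control plane is known-healthy at this point, the client host is overridden to https://192.168.39.16:8443 before polling begins. A sketch of the same override with client-go (the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	// Mirror the "Overriding stale ClientConfig host" step: talk to an API
	// server that is known to be up instead of the possibly-stale VIP.
	cfg.Host = "https://192.168.39.16:8443"
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-735960-m03", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("observed node:", node.Name)
}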
	I0701 12:26:42.647948  653531 node_ready.go:35] waiting up to 6m0s for node "ha-735960-m03" to be "Ready" ...
	I0701 12:26:42.648043  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:42.648055  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:42.648066  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:42.648079  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:42.652553  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:26:43.148887  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:43.148913  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.148924  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.148931  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.152504  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:43.153020  653531 node_ready.go:49] node "ha-735960-m03" has status "Ready":"True"
	I0701 12:26:43.153041  653531 node_ready.go:38] duration metric: took 505.070913ms for node "ha-735960-m03" to be "Ready" ...
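node_ready re-GETs the node roughly every 500 ms (visible in the timestamps) until its NodeReady condition reports True, here after 505 ms. A sketch of that loop with client-go's polling helper (interval and timeout are guesses matching the log, not minikube's constants):

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls until the node's NodeReady condition is True, the loop
// the node_ready lines above record.
func waitNodeReady(cs *kubernetes.Clientset, name string) error {
	return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API hiccups as "not yet"
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}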
	I0701 12:26:43.153051  653531 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 12:26:43.153132  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:26:43.153144  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.153154  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.153161  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.159789  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:26:43.167076  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.167158  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:26:43.167167  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.167175  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.167179  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.169757  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.170310  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:43.170347  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.170357  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.170362  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.173097  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.173879  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:43.173897  653531 pod_ready.go:81] duration metric: took 6.79477ms for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.173905  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.173970  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p4rtz
	I0701 12:26:43.173977  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.173984  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.173987  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.176719  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.177389  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:43.177403  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.177410  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.177415  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.180272  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.180876  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:43.180892  653531 pod_ready.go:81] duration metric: took 6.981686ms for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.180901  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.180946  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960
	I0701 12:26:43.180953  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.180959  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.180963  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.183979  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:43.184715  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:43.184733  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.184744  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.184750  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.187303  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.187727  653531 pod_ready.go:92] pod "etcd-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:43.187743  653531 pod_ready.go:81] duration metric: took 6.837753ms for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.187751  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.187803  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m02
	I0701 12:26:43.187810  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.187816  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.187820  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.190206  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.190728  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:43.190744  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.190753  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.190761  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.193433  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.194190  653531 pod_ready.go:92] pod "etcd-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:43.194207  653531 pod_ready.go:81] duration metric: took 6.448739ms for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.194216  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.349638  653531 request.go:629] Waited for 155.349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:43.349754  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:43.349767  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.349778  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.349790  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.354862  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:26:43.548911  653531 request.go:629] Waited for 193.270032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:43.548983  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:43.549014  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.549029  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.549034  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.554047  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:26:43.749322  653531 request.go:629] Waited for 54.224497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:43.749397  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:43.749405  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.749423  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.749433  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.753610  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:26:43.949318  653531 request.go:629] Waited for 194.40537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:43.949442  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:43.949455  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.949466  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.949475  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.953476  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
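The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's own rate limiter: the rest.Config dumped earlier shows QPS:0, Burst:0, so the defaults (5 requests/second, burst 10) apply, and the tight two-requests-per-iteration poll loop exceeds them. Raising the limits on the config removes these waits, as in this sketch (values illustrative):

package sketch

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// clientWithHigherLimits builds a clientset whose client-side rate limiter
// will not throttle a readiness poll loop.
func clientWithHigherLimits(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // default is 5 requests/second
	cfg.Burst = 100 // default burst is 10
	return kubernetes.NewForConfig(cfg)
}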
	I0701 12:26:44.195013  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:44.195041  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:44.195053  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:44.195058  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:44.198623  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:44.349775  653531 request.go:629] Waited for 150.337133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:44.349881  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:44.349890  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:44.349901  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:44.349909  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:44.354832  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:26:44.694539  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:44.694560  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:44.694569  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:44.694573  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:44.698072  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:44.749262  653531 request.go:629] Waited for 50.212385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:44.749342  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:44.749357  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:44.749376  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:44.749400  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:44.759594  653531 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0701 12:26:45.194608  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:45.194639  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:45.194651  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:45.194656  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:45.198135  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:45.199157  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:45.199178  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:45.199187  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:45.199193  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:45.201747  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:45.202475  653531 pod_ready.go:102] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:45.695358  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:45.695387  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:45.695398  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:45.695405  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:45.698583  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:45.699570  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:45.699591  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:45.699603  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:45.699611  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:45.702299  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:46.195334  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:46.195357  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:46.195366  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:46.195369  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:46.199158  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:46.200116  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:46.200134  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:46.200146  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:46.200153  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:46.203740  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:46.695210  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:46.695238  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:46.695250  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:46.695257  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:46.698972  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:46.699688  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:46.699709  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:46.699722  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:46.699728  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:46.703576  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:47.194463  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:47.194494  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:47.194504  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:47.194512  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:47.197423  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:47.198125  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:47.198144  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:47.198156  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:47.198166  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:47.201172  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:47.695417  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:47.695446  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:47.695457  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:47.695463  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:47.698528  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:47.699400  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:47.699424  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:47.699435  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:47.699440  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:47.702619  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:47.703202  653531 pod_ready.go:102] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:48.194609  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:48.194632  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:48.194640  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:48.194656  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:48.197877  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:48.198784  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:48.198804  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:48.198815  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:48.198819  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:48.201611  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:48.694433  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:48.694459  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:48.694471  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:48.694478  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:48.697539  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:48.698170  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:48.698185  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:48.698193  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:48.698196  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:48.700886  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:49.194905  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:49.194931  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:49.194942  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:49.194954  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:49.199572  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:26:49.200541  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:49.200560  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:49.200570  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:49.200575  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:49.204090  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:49.694531  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:49.694551  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:49.694559  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:49.694563  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:49.698105  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:49.699044  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:49.699062  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:49.699073  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:49.699078  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:49.701617  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:50.195294  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:50.195322  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:50.195333  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:50.195338  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:50.198820  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:50.199561  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:50.199579  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:50.199588  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:50.199594  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:50.202455  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:50.203029  653531 pod_ready.go:102] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:50.694678  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:50.694700  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:50.694708  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:50.694712  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:50.697694  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:50.698383  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:50.698401  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:50.698409  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:50.698413  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:50.701398  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:51.195484  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:51.195522  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:51.195535  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:51.195539  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:51.199113  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:51.199788  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:51.199804  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:51.199811  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:51.199815  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:51.202679  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:51.695276  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:51.695304  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:51.695318  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:51.695325  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:51.698725  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:51.699425  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:51.699444  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:51.699454  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:51.699461  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:51.702960  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:52.195136  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:52.195168  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:52.195178  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:52.195182  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:52.198421  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:52.199068  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:52.199081  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:52.199089  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:52.199133  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:52.201737  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:52.695128  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:52.695153  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:52.695161  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:52.695165  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:52.698791  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:52.699625  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:52.699640  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:52.699647  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:52.699666  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:52.702284  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:52.702827  653531 pod_ready.go:102] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:53.194518  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:53.194542  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:53.194550  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:53.194555  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:53.197969  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:53.198583  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:53.198602  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:53.198610  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:53.198615  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:53.201376  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:53.695296  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:53.695318  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:53.695326  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:53.695331  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:53.699078  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:53.699884  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:53.699910  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:53.699922  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:53.699929  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:53.703186  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:54.195014  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:54.195043  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:54.195054  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:54.195058  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:54.199057  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:54.199733  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:54.199750  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:54.199758  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:54.199763  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:54.202961  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:54.695177  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:54.695212  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:54.695225  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:54.695233  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:54.698371  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:54.699201  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:54.699216  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:54.699224  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:54.699227  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:54.702002  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:55.194543  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:55.194566  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:55.194574  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:55.194579  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:55.198201  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:55.198814  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:55.198832  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:55.198839  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:55.198843  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:55.201469  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:55.201993  653531 pod_ready.go:102] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:55.694950  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:55.694972  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:55.694983  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:55.694990  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:55.698498  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:55.699087  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:55.699101  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:55.699108  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:55.699112  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:55.701817  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.194521  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:56.194544  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.194552  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.194557  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.197837  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:56.198482  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:56.198499  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.198505  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.198509  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.201147  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.201653  653531 pod_ready.go:92] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:56.201674  653531 pod_ready.go:81] duration metric: took 13.007452083s for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
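etcd-ha-735960-m03 needed ~13 s of polling before its Ready condition flipped, consistent with the restarted etcd member having to rejoin the quorum first. The predicate behind each pod_ready "Ready":"True"/"False" line reduces to a condition check like this sketch (clientset assumed as in the earlier sketches):

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podReady reports whether the pod's Ready condition is True: the predicate
// behind each pod_ready status line in this log.
func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}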
	I0701 12:26:56.201692  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:56.201750  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960
	I0701 12:26:56.201757  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.201764  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.201770  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.204418  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.205132  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:56.205148  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.205154  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.205158  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.207485  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.207887  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:56.207907  653531 pod_ready.go:81] duration metric: took 6.206212ms for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:56.207916  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:56.207971  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m02
	I0701 12:26:56.207981  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.207988  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.207992  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.210274  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.210769  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:56.210784  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.210791  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.210795  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.213307  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.213730  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:56.213745  653531 pod_ready.go:81] duration metric: took 5.823695ms for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:56.213752  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:56.213799  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:56.213806  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.213813  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.213817  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.221893  653531 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0701 12:26:56.222630  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:56.222650  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.222661  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.222665  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.225298  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.714434  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:56.714457  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.714466  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.714473  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.717715  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:56.718387  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:56.718404  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.718414  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.718420  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.721172  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:57.213955  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:57.213979  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:57.213987  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:57.213992  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:57.217394  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:57.218050  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:57.218071  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:57.218082  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:57.218088  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:57.221478  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:57.714757  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:57.714779  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:57.714787  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:57.714792  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:57.717911  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:57.718695  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:57.718720  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:57.718734  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:57.718740  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:57.721551  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:58.214582  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:58.214605  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:58.214613  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:58.214616  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:58.218396  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:58.219147  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:58.219167  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:58.219174  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:58.219178  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:58.221830  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:58.222386  653531 pod_ready.go:102] pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:58.714864  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:58.714890  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:58.714901  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:58.714906  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:58.718181  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:58.718855  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:58.718874  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:58.718881  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:58.718885  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:58.722484  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:59.214439  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:59.214472  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:59.214484  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:59.214491  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:59.217758  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:59.218712  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:59.218732  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:59.218738  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:59.218742  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:59.221527  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:59.713995  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:59.714020  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:59.714028  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:59.714033  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:59.717121  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:59.717838  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:59.717855  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:59.717862  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:59.717866  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:59.720568  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:00.214542  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:00.214568  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:00.214578  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:00.214583  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:00.218220  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:00.218919  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:00.218938  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:00.218947  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:00.218954  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:00.222119  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:00.223039  653531 pod_ready.go:102] pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:27:00.714993  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:00.715015  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:00.715023  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:00.715027  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:00.718022  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:00.718871  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:00.718894  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:00.718905  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:00.718910  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:00.721660  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:01.214293  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:01.214320  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:01.214345  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:01.214354  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:01.217660  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:01.218619  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:01.218636  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:01.218645  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:01.218649  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:01.221248  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:01.714569  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:01.714593  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:01.714602  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:01.714607  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:01.717986  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:01.718877  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:01.718900  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:01.718912  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:01.718917  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:01.722103  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.213928  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:02.213953  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.213961  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.213965  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.217318  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.218078  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:02.218093  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.218099  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.218102  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.221493  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.714825  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:02.714849  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.714857  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.714862  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.718359  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.719162  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:02.719180  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.719188  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.719193  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.722363  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.723005  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:02.723029  653531 pod_ready.go:81] duration metric: took 6.509269845s for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.723044  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.723152  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960
	I0701 12:27:02.723163  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.723174  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.723186  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.726502  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.727250  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:02.727266  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.727277  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.727280  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.730522  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.731090  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:02.731116  653531 pod_ready.go:81] duration metric: took 8.062099ms for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.731129  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.731206  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m02
	I0701 12:27:02.731216  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.731226  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.731232  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.734354  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.735350  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:02.735370  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.735378  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.735381  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.738250  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:02.739014  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:02.739035  653531 pod_ready.go:81] duration metric: took 7.898052ms for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.739045  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.739108  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m03
	I0701 12:27:02.739116  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.739125  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.739134  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.742376  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.743084  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:02.743106  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.743117  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.743121  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.746455  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.747046  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:02.747075  653531 pod_ready.go:81] duration metric: took 8.017741ms for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.747091  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.747213  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25ssf
	I0701 12:27:02.747226  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.747237  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.747242  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.750009  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:02.750887  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:27:02.750910  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.750941  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.750947  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.753841  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:02.754410  653531 pod_ready.go:97] node "ha-735960-m04" hosting pod "kube-proxy-25ssf" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m04" has status "Ready":"Unknown"
	I0701 12:27:02.754439  653531 pod_ready.go:81] duration metric: took 7.336267ms for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	E0701 12:27:02.754453  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m04" hosting pod "kube-proxy-25ssf" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m04" has status "Ready":"Unknown"
	I0701 12:27:02.754464  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.915931  653531 request.go:629] Waited for 161.334912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:27:02.916009  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:27:02.916016  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.916026  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.916032  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.922578  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:27:03.115563  653531 request.go:629] Waited for 192.243271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:03.115665  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:03.115679  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:03.115693  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:03.115702  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:03.119673  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:03.120379  653531 pod_ready.go:92] pod "kube-proxy-776rt" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:03.120399  653531 pod_ready.go:81] duration metric: took 365.926734ms for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
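[editor's note] The "Waited for ... due to client-side throttling" lines above are delays injected by client-go's own rate limiter once its request budget is exhausted — as the message itself says, this is not server-side API Priority and Fairness. A hedged sketch (not minikube's code) of how a client raises that budget before building the clientset; the kubeconfig path is a placeholder:

    // Hedged sketch: lift client-go's client-side rate limits (defaults are
    // QPS 5, burst 10 for kubeconfig-built clients) so requests are not
    // queued by the limiter that produced the log lines above.
    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 50   // requests per second allowed by the client
    	cfg.Burst = 100 // short-term burst above QPS
    	return kubernetes.NewForConfig(cfg)
    }

With the defaults, a burst of paired pod/node GETs like the ones in this wait loop exceeds the budget quickly, which is why nearly every request from here on reports a ~170-200ms client-side wait.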
	I0701 12:27:03.120409  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:03.315515  653531 request.go:629] Waited for 195.003147ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:27:03.315575  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:27:03.315580  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:03.315588  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:03.315593  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:03.319367  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:03.515329  653531 request.go:629] Waited for 195.408895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:03.515421  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:03.515429  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:03.515440  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:03.515452  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:03.518825  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:03.519611  653531 pod_ready.go:92] pod "kube-proxy-b6knb" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:03.519633  653531 pod_ready.go:81] duration metric: took 399.213433ms for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:03.519642  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:03.715721  653531 request.go:629] Waited for 195.977677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:27:03.715811  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:27:03.715820  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:03.715828  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:03.715833  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:03.720058  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:03.915338  653531 request.go:629] Waited for 194.486914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:03.915438  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:03.915447  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:03.915455  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:03.915462  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:03.919143  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:03.919765  653531 pod_ready.go:92] pod "kube-proxy-lphzn" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:03.919789  653531 pod_ready.go:81] duration metric: took 400.14123ms for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:03.919800  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:04.114907  653531 request.go:629] Waited for 195.032639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:27:04.114983  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:27:04.115004  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:04.115019  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:04.115027  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:04.119283  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:04.315128  653531 request.go:629] Waited for 195.065236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:04.315231  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:04.315243  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:04.315255  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:04.315264  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:04.319107  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:04.319792  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:04.319821  653531 pod_ready.go:81] duration metric: took 400.011957ms for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:04.319838  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:04.515786  653531 request.go:629] Waited for 195.848501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:27:04.515865  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:27:04.515872  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:04.515885  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:04.515894  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:04.519607  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:04.715555  653531 request.go:629] Waited for 195.254305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:04.715662  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:04.715673  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:04.715686  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:04.715696  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:04.718989  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:04.719533  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:04.719555  653531 pod_ready.go:81] duration metric: took 399.709368ms for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:04.719565  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:04.915742  653531 request.go:629] Waited for 196.076319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:27:04.915873  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:27:04.915884  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:04.915892  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:04.915896  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:04.919910  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:05.114903  653531 request.go:629] Waited for 194.321141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:05.114998  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:05.115010  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.115020  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.115029  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.118835  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:05.119325  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:05.119348  653531 pod_ready.go:81] duration metric: took 399.776156ms for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:05.119360  653531 pod_ready.go:38] duration metric: took 21.966297492s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
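[editor's note] The paired pod/node GETs above, repeated on a roughly 500ms tick from 12:26:58 until the Ready flip at 12:27:02, are minikube's pod-readiness wait loop. A minimal client-go sketch of that pattern — a hypothetical helper, not minikube's actual pod_ready.go:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls one pod until its Ready condition is True or the
    // timeout expires — the same GET-per-tick shape as the log above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, tick, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
    		}
    		time.Sleep(tick)
    	}
    }

In the run above the equivalent of waitPodReady was invoked with a 6m0s timeout per pod, and the second GET per tick (the node object) is what lets the real loop short-circuit with the "node ... not Ready (skipping!)" path seen for kube-proxy-25ssf.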
	I0701 12:27:05.119380  653531 api_server.go:52] waiting for apiserver process to appear ...
	I0701 12:27:05.119446  653531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 12:27:05.134970  653531 api_server.go:72] duration metric: took 22.668924734s to wait for apiserver process to appear ...
	I0701 12:27:05.135005  653531 api_server.go:88] waiting for apiserver healthz status ...
	I0701 12:27:05.135037  653531 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0701 12:27:05.139924  653531 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0701 12:27:05.140029  653531 round_trippers.go:463] GET https://192.168.39.16:8443/version
	I0701 12:27:05.140040  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.140052  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.140060  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.141045  653531 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0701 12:27:05.141124  653531 api_server.go:141] control plane version: v1.30.2
	I0701 12:27:05.141142  653531 api_server.go:131] duration metric: took 6.129152ms to wait for apiserver health ...
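[editor's note] The healthz check above is a raw GET of /healthz that expects the literal body "ok", followed by a GET of /version to read the control-plane version. One common client-go equivalent goes through the discovery REST client; this sketch assumes that approach, and the error text is illustrative:

    package main

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    )

    // apiServerHealthy probes /healthz the way the api_server.go check above
    // does, treating any body other than "ok" as unhealthy.
    func apiServerHealthy(ctx context.Context, cs kubernetes.Interface) error {
    	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
    	if err != nil {
    		return err
    	}
    	if string(body) != "ok" {
    		return fmt.Errorf("unexpected /healthz body: %q", body)
    	}
    	return nil
    }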
	I0701 12:27:05.141156  653531 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 12:27:05.315496  653531 request.go:629] Waited for 174.257848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:27:05.315603  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:27:05.315615  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.315627  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.315640  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.331176  653531 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0701 12:27:05.341126  653531 system_pods.go:59] 26 kube-system pods found
	I0701 12:27:05.341168  653531 system_pods.go:61] "coredns-7db6d8ff4d-nk4lf" [c03dd635-a82d-4f18-bd72-ec575f91867e] Running
	I0701 12:27:05.341173  653531 system_pods.go:61] "coredns-7db6d8ff4d-p4rtz" [267efba7-bf34-48d5-ab15-5bda45ff2f4f] Running
	I0701 12:27:05.341177  653531 system_pods.go:61] "etcd-ha-735960" [4b98745c-292f-42b5-977c-69c50fd241f1] Running
	I0701 12:27:05.341181  653531 system_pods.go:61] "etcd-ha-735960-m02" [fed8cdfa-8428-47e0-84ef-05297ad232f8] Running
	I0701 12:27:05.341184  653531 system_pods.go:61] "etcd-ha-735960-m03" [50b07bc3-ff6b-487d-8654-901d96892868] Running
	I0701 12:27:05.341187  653531 system_pods.go:61] "kindnet-2424m" [aa18d5dd-f6eb-4f04-a61e-b0b257e214af] Running
	I0701 12:27:05.341190  653531 system_pods.go:61] "kindnet-6gx8s" [7f46a773-a075-476c-9e54-89f125b4b57a] Running
	I0701 12:27:05.341195  653531 system_pods.go:61] "kindnet-7f6hm" [a8c302b4-1163-4d4f-bfe3-4fd3b5d23cf0] Running
	I0701 12:27:05.341199  653531 system_pods.go:61] "kindnet-bztzv" [7afa0e45-3d10-40bc-b422-7005a3ca9d3a] Running
	I0701 12:27:05.341203  653531 system_pods.go:61] "kube-apiserver-ha-735960" [ad041aaa-465a-4d8a-a8dc-b7665e1d587d] Running
	I0701 12:27:05.341208  653531 system_pods.go:61] "kube-apiserver-ha-735960-m02" [ba28f48e-1c18-47e3-ab11-a9b5588c5c32] Running
	I0701 12:27:05.341213  653531 system_pods.go:61] "kube-apiserver-ha-735960-m03" [baafa3bf-78ee-4269-9591-b0440927e055] Running
	I0701 12:27:05.341218  653531 system_pods.go:61] "kube-controller-manager-ha-735960" [3f0f0cf5-329d-47bc-b922-7583902e2607] Running
	I0701 12:27:05.341222  653531 system_pods.go:61] "kube-controller-manager-ha-735960-m02" [258fde18-ac5c-4446-842b-9465529b154c] Running
	I0701 12:27:05.341231  653531 system_pods.go:61] "kube-controller-manager-ha-735960-m03" [79acc56b-a9e0-4d4b-bc64-1a3a36ddf051] Running
	I0701 12:27:05.341235  653531 system_pods.go:61] "kube-proxy-25ssf" [11f0dc23-ab9d-4d39-988d-4c44dfde86cd] Running
	I0701 12:27:05.341244  653531 system_pods.go:61] "kube-proxy-776rt" [5666dac7-924e-4429-bd1d-a1a5647cc611] Running
	I0701 12:27:05.341248  653531 system_pods.go:61] "kube-proxy-b6knb" [eb36e930-5799-4ff7-821a-ccb22303cd1b] Running
	I0701 12:27:05.341253  653531 system_pods.go:61] "kube-proxy-lphzn" [0761a7a6-740e-4cde-9ab5-e02e8d417907] Running
	I0701 12:27:05.341258  653531 system_pods.go:61] "kube-scheduler-ha-735960" [c624cf42-a7d6-4aaf-859d-1aeaf29f9acb] Running
	I0701 12:27:05.341266  653531 system_pods.go:61] "kube-scheduler-ha-735960-m02" [7de78af7-2d79-46dc-bd34-f221d79fde06] Running
	I0701 12:27:05.341276  653531 system_pods.go:61] "kube-scheduler-ha-735960-m03" [9f9a2030-9332-44af-b8dc-3b4609e53f91] Running
	I0701 12:27:05.341284  653531 system_pods.go:61] "kube-vip-ha-735960" [4299679a-c145-4f4f-8ec6-3cd468b98ef1] Running
	I0701 12:27:05.341289  653531 system_pods.go:61] "kube-vip-ha-735960-m02" [1c9b13e1-515c-43c0-8d99-5ad1c1807727] Running
	I0701 12:27:05.341296  653531 system_pods.go:61] "kube-vip-ha-735960-m03" [7069ea7c-5461-4fe6-a969-97fe33396ebb] Running
	I0701 12:27:05.341300  653531 system_pods.go:61] "storage-provisioner" [f5c4f7f9-d648-4019-a5ea-6ce59f6c5663] Running
	I0701 12:27:05.341308  653531 system_pods.go:74] duration metric: took 200.142768ms to wait for pod list to return data ...
	I0701 12:27:05.341319  653531 default_sa.go:34] waiting for default service account to be created ...
	I0701 12:27:05.515805  653531 request.go:629] Waited for 174.38988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/default/serviceaccounts
	I0701 12:27:05.515869  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/default/serviceaccounts
	I0701 12:27:05.515874  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.515882  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.515886  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.519545  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:05.519680  653531 default_sa.go:45] found service account: "default"
	I0701 12:27:05.519701  653531 default_sa.go:55] duration metric: took 178.373792ms for default service account to be created ...
	I0701 12:27:05.519712  653531 system_pods.go:116] waiting for k8s-apps to be running ...
	I0701 12:27:05.715337  653531 request.go:629] Waited for 195.548539ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:27:05.715405  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:27:05.715411  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.715423  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.715431  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.722571  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:27:05.729587  653531 system_pods.go:86] 26 kube-system pods found
	I0701 12:27:05.729628  653531 system_pods.go:89] "coredns-7db6d8ff4d-nk4lf" [c03dd635-a82d-4f18-bd72-ec575f91867e] Running
	I0701 12:27:05.729636  653531 system_pods.go:89] "coredns-7db6d8ff4d-p4rtz" [267efba7-bf34-48d5-ab15-5bda45ff2f4f] Running
	I0701 12:27:05.729642  653531 system_pods.go:89] "etcd-ha-735960" [4b98745c-292f-42b5-977c-69c50fd241f1] Running
	I0701 12:27:05.729649  653531 system_pods.go:89] "etcd-ha-735960-m02" [fed8cdfa-8428-47e0-84ef-05297ad232f8] Running
	I0701 12:27:05.729655  653531 system_pods.go:89] "etcd-ha-735960-m03" [50b07bc3-ff6b-487d-8654-901d96892868] Running
	I0701 12:27:05.729661  653531 system_pods.go:89] "kindnet-2424m" [aa18d5dd-f6eb-4f04-a61e-b0b257e214af] Running
	I0701 12:27:05.729666  653531 system_pods.go:89] "kindnet-6gx8s" [7f46a773-a075-476c-9e54-89f125b4b57a] Running
	I0701 12:27:05.729671  653531 system_pods.go:89] "kindnet-7f6hm" [a8c302b4-1163-4d4f-bfe3-4fd3b5d23cf0] Running
	I0701 12:27:05.729677  653531 system_pods.go:89] "kindnet-bztzv" [7afa0e45-3d10-40bc-b422-7005a3ca9d3a] Running
	I0701 12:27:05.729684  653531 system_pods.go:89] "kube-apiserver-ha-735960" [ad041aaa-465a-4d8a-a8dc-b7665e1d587d] Running
	I0701 12:27:05.729689  653531 system_pods.go:89] "kube-apiserver-ha-735960-m02" [ba28f48e-1c18-47e3-ab11-a9b5588c5c32] Running
	I0701 12:27:05.729695  653531 system_pods.go:89] "kube-apiserver-ha-735960-m03" [baafa3bf-78ee-4269-9591-b0440927e055] Running
	I0701 12:27:05.729702  653531 system_pods.go:89] "kube-controller-manager-ha-735960" [3f0f0cf5-329d-47bc-b922-7583902e2607] Running
	I0701 12:27:05.729710  653531 system_pods.go:89] "kube-controller-manager-ha-735960-m02" [258fde18-ac5c-4446-842b-9465529b154c] Running
	I0701 12:27:05.729720  653531 system_pods.go:89] "kube-controller-manager-ha-735960-m03" [79acc56b-a9e0-4d4b-bc64-1a3a36ddf051] Running
	I0701 12:27:05.729729  653531 system_pods.go:89] "kube-proxy-25ssf" [11f0dc23-ab9d-4d39-988d-4c44dfde86cd] Running
	I0701 12:27:05.729737  653531 system_pods.go:89] "kube-proxy-776rt" [5666dac7-924e-4429-bd1d-a1a5647cc611] Running
	I0701 12:27:05.729745  653531 system_pods.go:89] "kube-proxy-b6knb" [eb36e930-5799-4ff7-821a-ccb22303cd1b] Running
	I0701 12:27:05.729755  653531 system_pods.go:89] "kube-proxy-lphzn" [0761a7a6-740e-4cde-9ab5-e02e8d417907] Running
	I0701 12:27:05.729764  653531 system_pods.go:89] "kube-scheduler-ha-735960" [c624cf42-a7d6-4aaf-859d-1aeaf29f9acb] Running
	I0701 12:27:05.729770  653531 system_pods.go:89] "kube-scheduler-ha-735960-m02" [7de78af7-2d79-46dc-bd34-f221d79fde06] Running
	I0701 12:27:05.729776  653531 system_pods.go:89] "kube-scheduler-ha-735960-m03" [9f9a2030-9332-44af-b8dc-3b4609e53f91] Running
	I0701 12:27:05.729783  653531 system_pods.go:89] "kube-vip-ha-735960" [4299679a-c145-4f4f-8ec6-3cd468b98ef1] Running
	I0701 12:27:05.729789  653531 system_pods.go:89] "kube-vip-ha-735960-m02" [1c9b13e1-515c-43c0-8d99-5ad1c1807727] Running
	I0701 12:27:05.729796  653531 system_pods.go:89] "kube-vip-ha-735960-m03" [7069ea7c-5461-4fe6-a969-97fe33396ebb] Running
	I0701 12:27:05.729802  653531 system_pods.go:89] "storage-provisioner" [f5c4f7f9-d648-4019-a5ea-6ce59f6c5663] Running
	I0701 12:27:05.729815  653531 system_pods.go:126] duration metric: took 210.095212ms to wait for k8s-apps to be running ...
	I0701 12:27:05.729829  653531 system_svc.go:44] waiting for kubelet service to be running ....
	I0701 12:27:05.729888  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:27:05.745646  653531 system_svc.go:56] duration metric: took 15.808828ms WaitForService to wait for kubelet
	I0701 12:27:05.745679  653531 kubeadm.go:576] duration metric: took 23.279640822s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:27:05.745702  653531 node_conditions.go:102] verifying NodePressure condition ...
	I0701 12:27:05.915161  653531 request.go:629] Waited for 169.354932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes
	I0701 12:27:05.915221  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes
	I0701 12:27:05.915226  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.915234  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.915239  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.919105  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:05.920307  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:05.920336  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:05.920352  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:05.920357  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:05.920361  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:05.920366  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:05.920370  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:05.920375  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:05.920382  653531 node_conditions.go:105] duration metric: took 174.672945ms to run NodePressure ...
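[editor's note] The NodePressure pass above lists the nodes once and reads each node's capacity — here 2 CPUs and 17734596Ki of ephemeral storage for all four nodes. A sketch of the same read via the nodes API:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists all nodes and prints the two capacity fields
    // reported in the node_conditions.go lines above.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    	}
    	return nil
    }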
	I0701 12:27:05.920400  653531 start.go:240] waiting for startup goroutines ...
	I0701 12:27:05.920438  653531 start.go:254] writing updated cluster config ...
	I0701 12:27:05.922556  653531 out.go:177] 
	I0701 12:27:05.924320  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:27:05.924444  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:27:05.926228  653531 out.go:177] * Starting "ha-735960-m04" worker node in "ha-735960" cluster
	I0701 12:27:05.927583  653531 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:27:05.927623  653531 cache.go:56] Caching tarball of preloaded images
	I0701 12:27:05.927740  653531 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:27:05.927753  653531 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:27:05.927868  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:27:05.928081  653531 start.go:360] acquireMachinesLock for ha-735960-m04: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:27:05.928138  653531 start.go:364] duration metric: took 34.293µs to acquireMachinesLock for "ha-735960-m04"
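[editor's note] The acquireMachinesLock spec printed above ({Name:mk... Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}) has the field shape of the Spec type in github.com/juju/mutex, which minikube uses for cross-process machine locks. The sketch below assumes that library (the v2 import path is an assumption); minikube's actual wrapper lives in its start.go:

    package main

    import (
    	"time"

    	"github.com/juju/clock"
    	"github.com/juju/mutex/v2"
    )

    // withMachinesLock acquires a named cross-process lock, runs fn, and
    // releases the lock. The Delay/Timeout values mirror the spec in the log.
    func withMachinesLock(name string, fn func()) error {
    	r, err := mutex.Acquire(mutex.Spec{
    		Name:    name, // e.g. the "mk..." hash printed above
    		Clock:   clock.WallClock,
    		Delay:   500 * time.Millisecond, // poll interval between attempts
    		Timeout: 13 * time.Minute,       // matches Timeout:13m0s in the log
    	})
    	if err != nil {
    		return err
    	}
    	defer r.Release()
    	fn()
    	return nil
    }

The near-instant acquisition (34.293µs) shows no other minikube process held the lock; under contention, Acquire would poll every 500ms until the 13m timeout.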
	I0701 12:27:05.928160  653531 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:27:05.928170  653531 fix.go:54] fixHost starting: m04
	I0701 12:27:05.928452  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:27:05.928496  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:27:05.944734  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39337
	I0701 12:27:05.945306  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:27:05.945856  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:27:05.945878  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:27:05.946270  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:27:05.946505  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:05.946718  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetState
	I0701 12:27:05.948900  653531 fix.go:112] recreateIfNeeded on ha-735960-m04: state=Stopped err=<nil>
	I0701 12:27:05.948936  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	W0701 12:27:05.949137  653531 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:27:05.951007  653531 out.go:177] * Restarting existing kvm2 VM for "ha-735960-m04" ...
	I0701 12:27:05.952219  653531 main.go:141] libmachine: (ha-735960-m04) Calling .Start
	I0701 12:27:05.952428  653531 main.go:141] libmachine: (ha-735960-m04) Ensuring networks are active...
	I0701 12:27:05.953378  653531 main.go:141] libmachine: (ha-735960-m04) Ensuring network default is active
	I0701 12:27:05.953815  653531 main.go:141] libmachine: (ha-735960-m04) Ensuring network mk-ha-735960 is active
	I0701 12:27:05.954229  653531 main.go:141] libmachine: (ha-735960-m04) Getting domain xml...
	I0701 12:27:05.954857  653531 main.go:141] libmachine: (ha-735960-m04) Creating domain...
	I0701 12:27:07.274791  653531 main.go:141] libmachine: (ha-735960-m04) Waiting to get IP...
	I0701 12:27:07.275684  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:07.276224  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:07.276269  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:07.276176  654403 retry.go:31] will retry after 236.931472ms: waiting for machine to come up
	I0701 12:27:07.514910  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:07.515487  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:07.515520  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:07.515422  654403 retry.go:31] will retry after 376.766943ms: waiting for machine to come up
	I0701 12:27:07.894235  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:07.894716  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:07.894748  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:07.894658  654403 retry.go:31] will retry after 389.939732ms: waiting for machine to come up
	I0701 12:27:08.286528  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:08.287041  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:08.287066  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:08.286982  654403 retry.go:31] will retry after 542.184171ms: waiting for machine to come up
	I0701 12:27:08.831459  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:08.832024  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:08.832105  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:08.832069  654403 retry.go:31] will retry after 609.488369ms: waiting for machine to come up
	I0701 12:27:09.442798  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:09.443236  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:09.443272  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:09.443174  654403 retry.go:31] will retry after 777.604605ms: waiting for machine to come up
	I0701 12:27:10.221860  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:10.222317  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:10.222352  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:10.222242  654403 retry.go:31] will retry after 1.013463977s: waiting for machine to come up
	I0701 12:27:11.237171  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:11.237628  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:11.237658  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:11.237572  654403 retry.go:31] will retry after 1.368493369s: waiting for machine to come up
	I0701 12:27:12.607736  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:12.608308  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:12.608342  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:12.608254  654403 retry.go:31] will retry after 1.709127759s: waiting for machine to come up
	I0701 12:27:14.320033  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:14.320531  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:14.320565  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:14.320491  654403 retry.go:31] will retry after 2.145058749s: waiting for machine to come up
	I0701 12:27:16.466840  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:16.467246  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:16.467275  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:16.467196  654403 retry.go:31] will retry after 2.340416682s: waiting for machine to come up
	I0701 12:27:18.809756  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:18.810215  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:18.810245  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:18.810155  654403 retry.go:31] will retry after 2.893605535s: waiting for machine to come up
	I0701 12:27:21.705535  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.706011  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has current primary IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.706036  653531 main.go:141] libmachine: (ha-735960-m04) Found IP for machine: 192.168.39.60
	I0701 12:27:21.706050  653531 main.go:141] libmachine: (ha-735960-m04) Reserving static IP address...
	I0701 12:27:21.706638  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "ha-735960-m04", mac: "52:54:00:2d:8e:6d", ip: "192.168.39.60"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:21.706671  653531 main.go:141] libmachine: (ha-735960-m04) Reserved static IP address: 192.168.39.60
	I0701 12:27:21.706689  653531 main.go:141] libmachine: (ha-735960-m04) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960-m04", mac: "52:54:00:2d:8e:6d", ip: "192.168.39.60"}
	I0701 12:27:21.706703  653531 main.go:141] libmachine: (ha-735960-m04) DBG | Getting to WaitForSSH function...
	I0701 12:27:21.706715  653531 main.go:141] libmachine: (ha-735960-m04) Waiting for SSH to be available...
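[editor's note] The DBG retry lines above ("will retry after 236.931472ms" up through 2.893605535s) show a jittered, growing backoff while libmachine waits for the VM's DHCP lease to appear; the WaitForSSH loop that follows has the same shape. A generic sketch of such a loop — the constants are illustrative placeholders, not minikube's retry package:

    package main

    import (
    	"errors"
    	"math/rand"
    	"time"
    )

    // waitFor retries fn with a growing, jittered delay, the pattern visible
    // in the retry.go DBG lines above.
    func waitFor(attempts int, base time.Duration, fn func() error) error {
    	delay := base
    	for i := 0; i < attempts; i++ {
    		if fn() == nil {
    			return nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay)/2 + 1))
    		time.Sleep(delay + jitter)
    		delay = delay * 3 / 2 // grow ~1.5x per attempt, like the logged intervals
    	}
    	return errors.New("condition not met before retries were exhausted")
    }

Here fn would be "query libvirt for the domain's current IP" (and later "run `exit 0` over SSH"), returning an error until the lease at 192.168.39.60 shows up.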
	I0701 12:27:21.709236  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.709702  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:21.709729  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.709818  653531 main.go:141] libmachine: (ha-735960-m04) DBG | Using SSH client type: external
	I0701 12:27:21.709841  653531 main.go:141] libmachine: (ha-735960-m04) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa (-rw-------)
	I0701 12:27:21.709870  653531 main.go:141] libmachine: (ha-735960-m04) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:27:21.709885  653531 main.go:141] libmachine: (ha-735960-m04) DBG | About to run SSH command:
	I0701 12:27:21.709897  653531 main.go:141] libmachine: (ha-735960-m04) DBG | exit 0
	I0701 12:27:21.838462  653531 main.go:141] libmachine: (ha-735960-m04) DBG | SSH cmd err, output: <nil>: 
	I0701 12:27:21.838803  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetConfigRaw
	I0701 12:27:21.839497  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetIP
	I0701 12:27:21.842255  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.842727  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:21.842764  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.843067  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:27:21.843309  653531 machine.go:94] provisionDockerMachine start ...
	I0701 12:27:21.843332  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:21.843625  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:21.846158  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.846625  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:21.846658  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.846874  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:21.847122  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:21.847313  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:21.847496  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:21.847763  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:21.847995  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:21.848012  653531 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:27:21.958527  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:27:21.958560  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetMachineName
	I0701 12:27:21.958896  653531 buildroot.go:166] provisioning hostname "ha-735960-m04"
	I0701 12:27:21.958928  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetMachineName
	I0701 12:27:21.959168  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:21.961718  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.962176  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:21.962212  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.962410  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:21.962629  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:21.962804  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:21.962930  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:21.963089  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:21.963293  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:21.963311  653531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960-m04 && echo "ha-735960-m04" | sudo tee /etc/hostname
	I0701 12:27:22.089150  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960-m04
	
	I0701 12:27:22.089185  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.092352  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.092805  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.092829  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.093059  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:22.093293  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.093532  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.093680  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:22.093947  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:22.094124  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:22.094152  653531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:27:22.211873  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
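	# The /etc/hosts guard run just above keeps the 127.0.1.1 entry idempotent:
	# rewrite the line when one exists, append otherwise. A standalone sketch of
	# the same logic (same hostname as in the log, run with sudo rights):
	NAME=ha-735960-m04
	if ! grep -xq ".*\s${NAME}" /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${NAME}/" /etc/hosts
	  else
	    echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts
	  fi
	fi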
	I0701 12:27:22.211908  653531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:27:22.211930  653531 buildroot.go:174] setting up certificates
	I0701 12:27:22.211938  653531 provision.go:84] configureAuth start
	I0701 12:27:22.211947  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetMachineName
	I0701 12:27:22.212269  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetIP
	I0701 12:27:22.215120  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.215523  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.215555  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.215810  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.218161  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.218800  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.218836  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.219044  653531 provision.go:143] copyHostCerts
	I0701 12:27:22.219086  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:27:22.219130  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:27:22.219141  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:27:22.219226  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:27:22.219330  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:27:22.219356  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:27:22.219365  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:27:22.219402  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:27:22.219472  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:27:22.219497  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:27:22.219503  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:27:22.219534  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:27:22.219602  653531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960-m04 san=[127.0.0.1 192.168.39.60 ha-735960-m04 localhost minikube]
	I0701 12:27:22.329827  653531 provision.go:177] copyRemoteCerts
	I0701 12:27:22.329892  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:27:22.329923  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.332967  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.333373  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.333406  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.333651  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:22.333896  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.334062  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:22.334281  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa Username:docker}
	I0701 12:27:22.417286  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:27:22.417383  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:27:22.441229  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:27:22.441316  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0701 12:27:22.465192  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:27:22.465262  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 12:27:22.489482  653531 provision.go:87] duration metric: took 277.524425ms to configureAuth
	I0701 12:27:22.489525  653531 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:27:22.489832  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:27:22.489882  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:22.490191  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.493387  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.493808  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.493842  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.494001  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:22.494272  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.494482  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.494666  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:22.494871  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:22.495082  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:22.495096  653531 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:27:22.603693  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:27:22.603722  653531 buildroot.go:70] root file system type: tmpfs
	I0701 12:27:22.603868  653531 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:27:22.603921  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.606932  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.607406  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.607441  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.607659  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:22.607881  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.608030  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.608161  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:22.608332  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:22.608539  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:22.608607  653531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.16"
	Environment="NO_PROXY=192.168.39.16,192.168.39.86"
	Environment="NO_PROXY=192.168.39.16,192.168.39.86,192.168.39.97"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:27:22.729176  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.16
	Environment=NO_PROXY=192.168.39.16,192.168.39.86
	Environment=NO_PROXY=192.168.39.16,192.168.39.86,192.168.39.97
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:27:22.729234  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.732936  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.733425  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.733462  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.733653  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:22.733908  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.734181  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.734376  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:22.734607  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:22.734842  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:22.734871  653531 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:27:24.534039  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
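	# The command above is an update-if-changed install of the docker unit:
	# `diff -u` exits non-zero when the files differ *or* when the target does
	# not yet exist (as here: "can't stat"), so the replace branch runs in both
	# cases. The same pattern in isolation (paths as in the log):
	UNIT=/lib/systemd/system/docker.service
	NEW=${UNIT}.new
	sudo diff -u "$UNIT" "$NEW" || {
	  sudo mv "$NEW" "$UNIT"
	  sudo systemctl daemon-reload    # pick up the new unit file
	  sudo systemctl enable docker    # ensure start on boot
	  sudo systemctl restart docker   # apply the new ExecStart flags
	}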
	I0701 12:27:24.534075  653531 machine.go:97] duration metric: took 2.690748128s to provisionDockerMachine
	I0701 12:27:24.534091  653531 start.go:293] postStartSetup for "ha-735960-m04" (driver="kvm2")
	I0701 12:27:24.534104  653531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:27:24.534123  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.534499  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:27:24.534541  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:24.537254  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.537740  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.537779  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.537959  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:24.538181  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.538373  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:24.538597  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa Username:docker}
	I0701 12:27:24.622239  653531 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:27:24.626566  653531 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:27:24.626597  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:27:24.626682  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:27:24.626776  653531 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:27:24.626790  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:27:24.626899  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:27:24.638615  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:27:24.662568  653531 start.go:296] duration metric: took 128.459164ms for postStartSetup
	I0701 12:27:24.662618  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.663010  653531 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:27:24.663051  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:24.665748  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.666087  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.666114  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.666265  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:24.666549  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.666727  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:24.666943  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa Username:docker}
	I0701 12:27:24.753987  653531 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:27:24.754081  653531 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:27:24.791910  653531 fix.go:56] duration metric: took 18.863722464s for fixHost
	I0701 12:27:24.791970  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:24.795473  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.795824  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.795860  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.796063  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:24.796321  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.796518  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.796690  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:24.796892  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:24.797130  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:24.797146  653531 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0701 12:27:24.911069  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836844.884316737
	
	I0701 12:27:24.911100  653531 fix.go:216] guest clock: 1719836844.884316737
	I0701 12:27:24.911110  653531 fix.go:229] Guest: 2024-07-01 12:27:24.884316737 +0000 UTC Remote: 2024-07-01 12:27:24.791945819 +0000 UTC m=+202.261797488 (delta=92.370918ms)
	I0701 12:27:24.911131  653531 fix.go:200] guest clock delta is within tolerance: 92.370918ms
	I0701 12:27:24.911137  653531 start.go:83] releasing machines lock for "ha-735960-m04", held for 18.982986548s
	I0701 12:27:24.911163  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.911481  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetIP
	I0701 12:27:24.914298  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.914691  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.914721  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.917119  653531 out.go:177] * Found network options:
	I0701 12:27:24.918569  653531 out.go:177]   - NO_PROXY=192.168.39.16,192.168.39.86,192.168.39.97
	W0701 12:27:24.919961  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:27:24.919987  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:27:24.919997  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:27:24.920012  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.920847  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.921063  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.921170  653531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:27:24.921210  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	W0701 12:27:24.921252  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:27:24.921277  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:27:24.921290  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:27:24.921364  653531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0701 12:27:24.921385  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:24.924253  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.924561  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.924715  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.924742  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.924933  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:24.925058  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.925080  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.925110  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.925325  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:24.925339  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:24.925519  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.925615  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa Username:docker}
	I0701 12:27:24.925685  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:24.925840  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa Username:docker}
	W0701 12:27:25.004044  653531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:27:25.004109  653531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:27:25.029712  653531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:27:25.029746  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:27:25.029880  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:27:25.052034  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:27:25.062847  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:27:25.073005  653531 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:27:25.073080  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:27:25.083300  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:27:25.093834  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:27:25.104814  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:27:25.115006  653531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:27:25.126080  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:27:25.136492  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:27:25.147986  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:27:25.158638  653531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:27:25.168301  653531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:27:25.177427  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:25.290645  653531 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 12:27:25.317946  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:27:25.318090  653531 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:27:25.333522  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:27:25.349308  653531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:27:25.366057  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:27:25.379554  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:27:25.393005  653531 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:27:25.427883  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:27:25.443710  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:27:25.462653  653531 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:27:25.466440  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:27:25.475817  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:27:25.491900  653531 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:27:25.609810  653531 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:27:25.736607  653531 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:27:25.736666  653531 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 12:27:25.753218  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:25.872913  653531 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:27:28.274644  653531 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.401692528s)
	I0701 12:27:28.274730  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 12:27:28.288270  653531 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0701 12:27:28.306360  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:27:28.320063  653531 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:27:28.444909  653531 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:27:28.582500  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:28.708064  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:27:28.728173  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:27:28.743660  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:28.873765  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0701 12:27:28.960958  653531 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:27:28.961063  653531 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0701 12:27:28.967089  653531 start.go:562] Will wait 60s for crictl version
	I0701 12:27:28.967205  653531 ssh_runner.go:195] Run: which crictl
	I0701 12:27:28.971404  653531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:27:29.011615  653531 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.1
	RuntimeApiVersion:  v1
	I0701 12:27:29.011699  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:27:29.041339  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:27:29.073461  653531 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
	I0701 12:27:29.075110  653531 out.go:177]   - env NO_PROXY=192.168.39.16
	I0701 12:27:29.076621  653531 out.go:177]   - env NO_PROXY=192.168.39.16,192.168.39.86
	I0701 12:27:29.078186  653531 out.go:177]   - env NO_PROXY=192.168.39.16,192.168.39.86,192.168.39.97
	I0701 12:27:29.079949  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetIP
	I0701 12:27:29.083268  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:29.083683  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:29.083711  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:29.084018  653531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0701 12:27:29.088562  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:27:29.105010  653531 mustload.go:65] Loading cluster: ha-735960
	I0701 12:27:29.105303  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:27:29.105654  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:27:29.105708  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:27:29.121628  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I0701 12:27:29.122222  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:27:29.122816  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:27:29.122844  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:27:29.123210  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:27:29.123475  653531 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:27:29.125364  653531 host.go:66] Checking if "ha-735960" exists ...
	I0701 12:27:29.125670  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:27:29.125708  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:27:29.141532  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36053
	I0701 12:27:29.142051  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:27:29.142638  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:27:29.142662  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:27:29.143010  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:27:29.143254  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:27:29.143488  653531 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960 for IP: 192.168.39.60
	I0701 12:27:29.143501  653531 certs.go:194] generating shared ca certs ...
	I0701 12:27:29.143518  653531 certs.go:226] acquiring lock for ca certs: {Name:mk34e166bfd069e523b2325e14d1812c523bff53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:27:29.143646  653531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key
	I0701 12:27:29.143686  653531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key
	I0701 12:27:29.143702  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 12:27:29.143722  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 12:27:29.143739  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 12:27:29.143757  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 12:27:29.143817  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem (1338 bytes)
	W0701 12:27:29.143851  653531 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854_empty.pem, impossibly tiny 0 bytes
	I0701 12:27:29.143871  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 12:27:29.143894  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:27:29.143916  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:27:29.143937  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem (1675 bytes)
	I0701 12:27:29.143972  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:27:29.144004  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:27:29.144021  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem -> /usr/share/ca-certificates/637854.pem
	I0701 12:27:29.144041  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /usr/share/ca-certificates/6378542.pem
	I0701 12:27:29.144072  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:27:29.171419  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:27:29.196509  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:27:29.222599  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:27:29.248989  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:27:29.275034  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem --> /usr/share/ca-certificates/637854.pem (1338 bytes)
	I0701 12:27:29.300102  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /usr/share/ca-certificates/6378542.pem (1708 bytes)
	I0701 12:27:29.327329  653531 ssh_runner.go:195] Run: openssl version
	I0701 12:27:29.333121  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:27:29.344555  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:27:29.349319  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:27:29.349394  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:27:29.355247  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 12:27:29.366285  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/637854.pem && ln -fs /usr/share/ca-certificates/637854.pem /etc/ssl/certs/637854.pem"
	I0701 12:27:29.376931  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/637854.pem
	I0701 12:27:29.381303  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 12:11 /usr/share/ca-certificates/637854.pem
	I0701 12:27:29.381385  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/637854.pem
	I0701 12:27:29.387458  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/637854.pem /etc/ssl/certs/51391683.0"
	I0701 12:27:29.398343  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6378542.pem && ln -fs /usr/share/ca-certificates/6378542.pem /etc/ssl/certs/6378542.pem"
	I0701 12:27:29.409321  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6378542.pem
	I0701 12:27:29.414299  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 12:11 /usr/share/ca-certificates/6378542.pem
	I0701 12:27:29.414400  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem
	I0701 12:27:29.420975  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6378542.pem /etc/ssl/certs/3ec20f2e.0"
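	# The <hash>.0 symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0)
	# follow OpenSSL's hashed-directory lookup: the link name is the subject-name
	# hash of the certificate, which is what `openssl x509 -hash` printed in the
	# steps between. Sketch for one cert:
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"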
	I0701 12:27:29.434286  653531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 12:27:29.438767  653531 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0701 12:27:29.438817  653531 kubeadm.go:928] updating node {m04 192.168.39.60 0 v1.30.2 docker false true} ...
	I0701 12:27:29.438918  653531 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0701 12:27:29.438988  653531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 12:27:29.450811  653531 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:27:29.450895  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0701 12:27:29.462511  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0701 12:27:29.480246  653531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:27:29.497624  653531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0701 12:27:29.502554  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:27:29.515005  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:29.648948  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:27:29.668809  653531 start.go:234] Will wait 6m0s for node &{Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0701 12:27:29.669186  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:27:29.671772  653531 out.go:177] * Verifying Kubernetes components...
	I0701 12:27:29.673288  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:29.823420  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:27:29.839349  653531 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:27:29.839675  653531 kapi.go:59] client config for ha-735960: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key", CAFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfbb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0701 12:27:29.839746  653531 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.16:8443
	I0701 12:27:29.840001  653531 node_ready.go:35] waiting up to 6m0s for node "ha-735960-m04" to be "Ready" ...
	I0701 12:27:29.840108  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:27:29.840118  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:29.840130  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:29.840138  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:29.843740  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.340654  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:27:30.340679  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.340687  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.340691  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.344079  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.344547  653531 node_ready.go:49] node "ha-735960-m04" has status "Ready":"True"
	I0701 12:27:30.344570  653531 node_ready.go:38] duration metric: took 504.547887ms for node "ha-735960-m04" to be "Ready" ...
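	# The readiness wait above polls GET /api/v1/nodes/<name> via the Go client
	# roughly every 500ms until the node reports Ready. A rough kubectl
	# equivalent of the same wait, using the node name and timeout from the log:
	kubectl wait --for=condition=Ready node/ha-735960-m04 --timeout=6m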
	I0701 12:27:30.344579  653531 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 12:27:30.344650  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:27:30.344660  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.344668  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.344675  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.351108  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:27:30.358660  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.358749  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:27:30.358758  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.358766  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.358771  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.362032  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.362784  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:30.362802  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.362812  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.362816  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.365450  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.365914  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:30.365936  653531 pod_ready.go:81] duration metric: took 7.248792ms for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.365949  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.366016  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p4rtz
	I0701 12:27:30.366025  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.366035  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.366043  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.368928  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.369820  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:30.369836  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.369843  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.369858  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.373004  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.373769  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:30.373785  653531 pod_ready.go:81] duration metric: took 7.830149ms for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.373794  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.373848  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960
	I0701 12:27:30.373856  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.373862  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.373867  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.376565  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.377340  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:30.377356  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.377363  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.377367  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.379523  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.379966  653531 pod_ready.go:92] pod "etcd-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:30.379982  653531 pod_ready.go:81] duration metric: took 6.178731ms for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.379991  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.380048  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m02
	I0701 12:27:30.380055  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.380062  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.380069  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.382485  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.383125  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:30.383141  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.383148  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.383155  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.385845  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.386599  653531 pod_ready.go:92] pod "etcd-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:30.386616  653531 pod_ready.go:81] duration metric: took 6.619715ms for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.386624  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.541077  653531 request.go:629] Waited for 154.380092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:27:30.541196  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:27:30.541207  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.541219  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.541229  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.544660  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.740754  653531 request.go:629] Waited for 195.337132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:30.740847  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:30.740857  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.740865  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.740869  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.744492  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.745072  653531 pod_ready.go:92] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:30.745094  653531 pod_ready.go:81] duration metric: took 358.462325ms for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
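
	The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter, not the API server: with the library defaults (QPS=5, Burst=10 when a rest.Config leaves them zero), a burst of back-to-back GETs drains the bucket and each further request queues for roughly 1/QPS = 200ms, which matches the ~195ms waits logged here. A minimal sketch of where those knobs live (raising them is illustrative, not something minikube does):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// Zero values mean client-go falls back to QPS=5, Burst=10; after 10
		// un-throttled requests, each additional one queues for ~200ms.
		cfg.QPS = 50    // sustained requests per second
		cfg.Burst = 100 // size of the initial token bucket
		fmt.Printf("QPS=%v Burst=%v\n", cfg.QPS, cfg.Burst)
	}
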
	I0701 12:27:30.745123  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.941364  653531 request.go:629] Waited for 196.100673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960
	I0701 12:27:30.941453  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960
	I0701 12:27:30.941466  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.941477  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.941487  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.946577  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:27:31.140711  653531 request.go:629] Waited for 193.223112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:31.140788  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:31.140793  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:31.140800  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:31.140804  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:31.146571  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:27:31.147245  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:31.147269  653531 pod_ready.go:81] duration metric: took 402.135058ms for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:31.147280  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:31.341367  653531 request.go:629] Waited for 193.988845ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m02
	I0701 12:27:31.341477  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m02
	I0701 12:27:31.341489  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:31.341500  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:31.341508  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:31.345561  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:31.540709  653531 request.go:629] Waited for 194.115472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:31.540784  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:31.540789  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:31.540797  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:31.540800  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:31.544920  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:31.545652  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:31.545679  653531 pod_ready.go:81] duration metric: took 398.391166ms for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:31.545689  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:31.741170  653531 request.go:629] Waited for 195.369232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:31.741243  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:31.741251  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:31.741261  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:31.741272  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:31.745382  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:31.941422  653531 request.go:629] Waited for 195.397431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:31.941512  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:31.941517  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:31.941526  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:31.941531  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:31.945358  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:31.945947  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:31.945971  653531 pod_ready.go:81] duration metric: took 400.276204ms for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:31.945982  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:32.140926  653531 request.go:629] Waited for 194.860847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960
	I0701 12:27:32.141014  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960
	I0701 12:27:32.141023  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:32.141048  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:32.141058  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:32.146741  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:27:32.341040  653531 request.go:629] Waited for 193.334578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:32.341112  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:32.341117  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:32.341126  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:32.341132  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:32.344664  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:32.345182  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:32.345200  653531 pod_ready.go:81] duration metric: took 399.209545ms for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:32.345210  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:32.541314  653531 request.go:629] Waited for 196.016373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m02
	I0701 12:27:32.541395  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m02
	I0701 12:27:32.541402  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:32.541414  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:32.541424  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:32.545663  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:32.741118  653531 request.go:629] Waited for 194.597088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:32.741201  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:32.741209  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:32.741220  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:32.741228  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:32.745051  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:32.745612  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:32.745636  653531 pod_ready.go:81] duration metric: took 400.417224ms for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:32.745651  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:32.941594  653531 request.go:629] Waited for 195.859048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m03
	I0701 12:27:32.941697  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m03
	I0701 12:27:32.941704  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:32.941712  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:32.941720  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:32.945661  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:33.140796  653531 request.go:629] Waited for 194.297237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:33.140872  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:33.140881  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:33.140892  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:33.140902  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:33.148523  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:27:33.149119  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:33.149229  653531 pod_ready.go:81] duration metric: took 403.561455ms for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:33.149274  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:33.341103  653531 request.go:629] Waited for 191.712414ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25ssf
	I0701 12:27:33.341203  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25ssf
	I0701 12:27:33.341211  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:33.341222  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:33.341236  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:33.345005  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:33.541118  653531 request.go:629] Waited for 195.201433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:27:33.541195  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:27:33.541202  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:33.541212  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:33.541220  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:33.544937  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:33.546208  653531 pod_ready.go:92] pod "kube-proxy-25ssf" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:33.546231  653531 pod_ready.go:81] duration metric: took 396.932438ms for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:33.546244  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:33.741353  653531 request.go:629] Waited for 195.026851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:27:33.741456  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:27:33.741466  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:33.741475  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:33.741481  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:33.745239  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:33.941300  653531 request.go:629] Waited for 195.397929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:33.941381  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:33.941388  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:33.941399  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:33.941408  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:33.944917  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:33.945530  653531 pod_ready.go:92] pod "kube-proxy-776rt" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:33.945551  653531 pod_ready.go:81] duration metric: took 399.299813ms for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:33.945565  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:34.140984  653531 request.go:629] Waited for 195.324742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:27:34.141050  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:27:34.141055  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:34.141063  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:34.141075  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:34.144882  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:34.341131  653531 request.go:629] Waited for 195.426765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:34.341198  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:34.341203  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:34.341211  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:34.341215  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:34.344938  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:34.345533  653531 pod_ready.go:92] pod "kube-proxy-b6knb" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:34.345554  653531 pod_ready.go:81] duration metric: took 399.982623ms for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:34.345563  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:34.540691  653531 request.go:629] Waited for 195.046851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:27:34.540777  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:27:34.540782  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:34.540794  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:34.540798  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:34.544410  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:34.741782  653531 request.go:629] Waited for 196.474041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:34.741851  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:34.741856  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:34.741864  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:34.741869  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:34.745447  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:34.746289  653531 pod_ready.go:92] pod "kube-proxy-lphzn" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:34.746312  653531 pod_ready.go:81] duration metric: took 400.742893ms for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:34.746344  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:34.941411  653531 request.go:629] Waited for 194.97877ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:27:34.941489  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:27:34.941495  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:34.941502  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:34.941510  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:34.944984  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:35.141079  653531 request.go:629] Waited for 195.409668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:35.141163  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:35.141168  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:35.141176  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:35.141194  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:35.144737  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:35.145431  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:35.145471  653531 pod_ready.go:81] duration metric: took 399.115782ms for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:35.145485  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:35.341554  653531 request.go:629] Waited for 195.979537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:27:35.341639  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:27:35.341650  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:35.341661  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:35.341672  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:35.345199  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:35.541252  653531 request.go:629] Waited for 195.403848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:35.541340  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:35.541346  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:35.541354  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:35.541362  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:35.545398  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:35.546010  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:35.546037  653531 pod_ready.go:81] duration metric: took 400.543297ms for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:35.546051  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:35.741442  653531 request.go:629] Waited for 195.294004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:27:35.741533  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:27:35.741541  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:35.741553  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:35.741565  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:35.744725  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:35.940687  653531 request.go:629] Waited for 195.284608ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:35.940760  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:35.940766  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:35.940776  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:35.940783  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:35.944482  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:35.945011  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:35.945032  653531 pod_ready.go:81] duration metric: took 398.973476ms for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:35.945048  653531 pod_ready.go:38] duration metric: took 5.600458409s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 12:27:35.945074  653531 system_svc.go:44] waiting for kubelet service to be running ...
	I0701 12:27:35.945143  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:27:35.962762  653531 system_svc.go:56] duration metric: took 17.680549ms WaitForService to wait for kubelet
	I0701 12:27:35.962795  653531 kubeadm.go:576] duration metric: took 6.293928606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
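
	The kubelet check above is just an exit-code test: systemctl is-active --quiet exits 0 only when the unit is active. minikube issues the command through ssh_runner over SSH; a local, simplified equivalent (hedged, not the ssh_runner code path):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// --quiet suppresses output; the exit code alone carries the answer
		// (0 = active, non-zero = inactive/failed/unknown unit).
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}
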
	I0701 12:27:35.962817  653531 node_conditions.go:102] verifying NodePressure condition ...
	I0701 12:27:36.141286  653531 request.go:629] Waited for 178.366419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes
	I0701 12:27:36.141375  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes
	I0701 12:27:36.141382  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:36.141394  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:36.141404  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:36.145426  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:36.146951  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:36.146977  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:36.146989  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:36.146992  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:36.146996  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:36.146999  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:36.147001  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:36.147004  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:36.147009  653531 node_conditions.go:105] duration metric: took 184.187151ms to run NodePressure ...
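
	The NodePressure pass above lists all four nodes once and echoes two capacity fields per node. A hedged sketch of the same read, using plain client-go calls rather than minikube's node_conditions.go:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// The same fields the log prints: ephemeral storage and CPU count.
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
		}
	}
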
	I0701 12:27:36.147024  653531 start.go:240] waiting for startup goroutines ...
	I0701 12:27:36.147054  653531 start.go:254] writing updated cluster config ...
	I0701 12:27:36.147403  653531 ssh_runner.go:195] Run: rm -f paused
	I0701 12:27:36.201170  653531 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0701 12:27:36.203376  653531 out.go:177] * Done! kubectl is now configured to use "ha-735960" cluster and "default" namespace by default
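
	The closing "(minor skew: 0)" compares the kubectl client's minor version against the cluster's; kubectl officially supports one minor version of skew in either direction. A naive stand-in for that comparison (the parsing is illustrative, not minikube's start.go):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minor extracts the minor component of a "major.minor.patch" version.
	// Deliberately naive: it assumes well-formed input like "1.30.2".
	func minor(v string) int {
		parts := strings.Split(v, ".")
		m, _ := strconv.Atoi(parts[1])
		return m
	}

	func main() {
		client, cluster := "1.30.2", "1.30.2" // values from the log line above
		skew := minor(client) - minor(cluster)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
	}
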
	
	
	==> Docker <==
	Jul 01 12:25:13 ha-735960 cri-dockerd[1398]: time="2024-07-01T12:25:13Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.366654170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.366710385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.366723641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.367696676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.388479723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.388593936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.389018347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.389381366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.390771396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.391192786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.391291548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.391685449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:24 ha-735960 dockerd[1125]: time="2024-07-01T12:25:24.321168284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:24 ha-735960 dockerd[1125]: time="2024-07-01T12:25:24.321255362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:24 ha-735960 dockerd[1125]: time="2024-07-01T12:25:24.321269990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:24 ha-735960 dockerd[1125]: time="2024-07-01T12:25:24.321347198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:25 ha-735960 dockerd[1125]: time="2024-07-01T12:25:25.309227018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:25 ha-735960 dockerd[1125]: time="2024-07-01T12:25:25.309334545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:25 ha-735960 dockerd[1125]: time="2024-07-01T12:25:25.309346230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:25 ha-735960 dockerd[1125]: time="2024-07-01T12:25:25.309972461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:26 ha-735960 dockerd[1125]: time="2024-07-01T12:25:26.350220788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:26 ha-735960 dockerd[1125]: time="2024-07-01T12:25:26.350306647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:26 ha-735960 dockerd[1125]: time="2024-07-01T12:25:26.350329844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:26 ha-735960 dockerd[1125]: time="2024-07-01T12:25:26.350448560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	51a34f4432461       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       1                   d2dc46de092d5       storage-provisioner
	bf788c37e0912       ac1c61439df46                                                                                         2 minutes ago       Running             kindnet-cni               1                   afbde11b8a740       kindnet-7f6hm
	8cdf2026ed072       8c811b4aec35f                                                                                         2 minutes ago       Running             busybox                   1                   7d907d7b28c98       busybox-fc5497c4f-pjfcw
	710f5c3a9f856       53c535741fb44                                                                                         2 minutes ago       Running             kube-proxy                1                   e49ff3fb80595       kube-proxy-lphzn
	61dc29970290b       cbb01a7bd410d                                                                                         2 minutes ago       Running             coredns                   1                   de1daec45ac89       coredns-7db6d8ff4d-p4rtz
	4a151786b08f5       cbb01a7bd410d                                                                                         2 minutes ago       Running             coredns                   1                   26981372e6136       coredns-7db6d8ff4d-nk4lf
	8ee3e44a43c3b       56ce0fd9fb532                                                                                         2 minutes ago       Running             kube-apiserver            5                   1b92afc0e4763       kube-apiserver-ha-735960
	67dc946c8c45c       e874818b3caac                                                                                         2 minutes ago       Running             kube-controller-manager   5                   3379ae4b4d689       kube-controller-manager-ha-735960
	1c046b029aa4a       38af8ddebf499                                                                                         3 minutes ago       Running             kube-vip                  1                   32c93b266a82d       kube-vip-ha-735960
	693eb0b8f5d78       7820c83aa1394                                                                                         3 minutes ago       Running             kube-scheduler            2                   ec2e5d106b539       kube-scheduler-ha-735960
	ec2c061093f10       e874818b3caac                                                                                         3 minutes ago       Exited              kube-controller-manager   4                   3379ae4b4d689       kube-controller-manager-ha-735960
	852492f61fee7       3861cfcd7c04c                                                                                         3 minutes ago       Running             etcd                      2                   c9044136ea747       etcd-ha-735960
	a3cb59ee8d572       56ce0fd9fb532                                                                                         3 minutes ago       Exited              kube-apiserver            4                   1b92afc0e4763       kube-apiserver-ha-735960
	cecb3dd12e16e       38af8ddebf499                                                                                         5 minutes ago       Exited              kube-vip                  0                   8d1562fb4b8c3       kube-vip-ha-735960
	6a200a6b49020       3861cfcd7c04c                                                                                         5 minutes ago       Exited              etcd                      1                   5b1097d48d724       etcd-ha-735960
	2d71437c5f06d       7820c83aa1394                                                                                         5 minutes ago       Exited              kube-scheduler            1                   fa7dea6a1b8bd       kube-scheduler-ha-735960
	1ef6d9da6a9c5       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   9 minutes ago       Exited              busybox                   0                   1f5ccc7b0e655       busybox-fc5497c4f-pjfcw
	a9c30cd4b3455       cbb01a7bd410d                                                                                         11 minutes ago      Exited              coredns                   0                   7b4b4f7ec4b63       coredns-7db6d8ff4d-nk4lf
	769b0b8751350       cbb01a7bd410d                                                                                         11 minutes ago      Exited              coredns                   0                   7a349370d4f88       coredns-7db6d8ff4d-p4rtz
	f472aef5302fd       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              11 minutes ago      Exited              kindnet-cni               0                   ab9c74a502295       kindnet-7f6hm
	6116abe6039dc       53c535741fb44                                                                                         11 minutes ago      Exited              kube-proxy                0                   da69191059798       kube-proxy-lphzn
	
	
	==> coredns [4a151786b08f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47509 - 49224 "HINFO IN 6979381009676685748.1822735874857968465. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033568754s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[177456986]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.743) (total time: 30001ms):
	Trace[177456986]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:25:53.744)
	Trace[177456986]: [30.001445665s] [30.001445665s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[947462717]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.742) (total time: 30003ms):
	Trace[947462717]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:25:53.743)
	Trace[947462717]: [30.0032009s] [30.0032009s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[886534813]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.742) (total time: 30004ms):
	Trace[886534813]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (12:25:53.745)
	Trace[886534813]: [30.004749172s] [30.004749172s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
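
	These reflector timeouts are not a CoreDNS bug: the kubernetes plugin lists Services, EndpointSlices and Namespaces through the in-cluster Service VIP (10.96.0.1:443), and until kube-proxy reprograms that VIP after the restart the dial goes nowhere, hence the 30s i/o timeouts followed by "Still waiting on: kubernetes". A hedged reproduction of the failing call (runs inside a pod; the 30-second timeout is illustrative rather than the reflector's exact deadline):

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		// Until kube-proxy programs the 10.96.0.1 Service VIP after the
		// restart, this call dials a black hole and times out.
		svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(ctx, metav1.ListOptions{Limit: 500})
		if err != nil {
			fmt.Println("list failed:", err)
			return
		}
		fmt.Println("services:", len(svcs.Items))
	}
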
	
	
	==> coredns [61dc29970290] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49574 - 32592 "HINFO IN 7534101530096432962.1842168600618500663. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017366932s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2027452150]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.742) (total time: 30003ms):
	Trace[2027452150]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:25:53.743)
	Trace[2027452150]: [30.003896779s] [30.003896779s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[222503702]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.743) (total time: 30003ms):
	Trace[222503702]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:25:53.744)
	Trace[222503702]: [30.003901467s] [30.003901467s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1950728267]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.742) (total time: 30005ms):
	Trace[1950728267]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (12:25:53.745)
	Trace[1950728267]: [30.005235099s] [30.005235099s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [769b0b875135] <==
	[INFO] 10.244.1.2:44221 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000082797s
	[INFO] 10.244.2.2:33797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157729s
	[INFO] 10.244.2.2:52590 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004055351s
	[INFO] 10.244.2.2:46983 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003253494s
	[INFO] 10.244.2.2:56187 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205215s
	[INFO] 10.244.2.2:41086 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158307s
	[INFO] 10.244.0.4:47783 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097077s
	[INFO] 10.244.0.4:50743 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001523s
	[INFO] 10.244.0.4:37141 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138763s
	[INFO] 10.244.1.2:32981 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132906s
	[INFO] 10.244.1.2:36762 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001646552s
	[INFO] 10.244.1.2:33583 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072434s
	[INFO] 10.244.2.2:37027 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156518s
	[INFO] 10.244.2.2:58435 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104504s
	[INFO] 10.244.2.2:36107 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090251s
	[INFO] 10.244.0.4:44792 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000227164s
	[INFO] 10.244.0.4:56557 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000140925s
	[INFO] 10.244.1.2:38284 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000232717s
	[INFO] 10.244.2.2:37664 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135198s
	[INFO] 10.244.2.2:60876 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00032392s
	[INFO] 10.244.1.2:37461 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133264s
	[INFO] 10.244.1.2:45182 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117372s
	[INFO] 10.244.1.2:37156 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000240093s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a9c30cd4b345] <==
	[INFO] 10.244.0.4:57095 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002251804s
	[INFO] 10.244.0.4:42381 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081215s
	[INFO] 10.244.0.4:53499 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00124929s
	[INFO] 10.244.0.4:41287 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174281s
	[INFO] 10.244.0.4:36433 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142863s
	[INFO] 10.244.1.2:47688 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130034s
	[INFO] 10.244.1.2:40562 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00183587s
	[INFO] 10.244.1.2:35137 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000771s
	[INFO] 10.244.1.2:37798 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184282s
	[INFO] 10.244.1.2:43876 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008807s
	[INFO] 10.244.2.2:35039 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119303s
	[INFO] 10.244.0.4:53229 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090292s
	[INFO] 10.244.0.4:42097 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011308s
	[INFO] 10.244.1.2:42114 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130767s
	[INFO] 10.244.1.2:56638 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110707s
	[INFO] 10.244.1.2:55805 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093484s
	[INFO] 10.244.2.2:51675 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000145117s
	[INFO] 10.244.2.2:56838 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136843s
	[INFO] 10.244.0.4:60951 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162889s
	[INFO] 10.244.0.4:34776 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112367s
	[INFO] 10.244.0.4:45397 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000073771s
	[INFO] 10.244.0.4:52372 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000058127s
	[INFO] 10.244.1.2:41033 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131962s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
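
	The query pattern in both shutdown-era CoreDNS logs shows the pod search list at work: with the standard pod resolv.conf (search default.svc.cluster.local svc.cluster.local cluster.local, ndots:5), a short name like kubernetes.default is retried with each suffix, so kubernetes.default. and kubernetes.default.default.svc.cluster.local. return NXDOMAIN before kubernetes.default.svc.cluster.local. answers NOERROR. A hedged illustration (meaningful only when run inside a pod, where the cluster resolver and search list apply):

	package main

	import (
		"context"
		"fmt"
		"net"
	)

	func main() {
		for _, name := range []string{
			"kubernetes.default",                   // expanded via the resolv.conf search list
			"kubernetes.default.svc.cluster.local", // the suffix that finally answers NOERROR
		} {
			addrs, err := net.DefaultResolver.LookupHost(context.Background(), name)
			fmt.Println(name, addrs, err)
		}
	}
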
	
	
	==> describe nodes <==
	Name:               ha-735960
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-735960
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=ha-735960
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_01T12_15_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 12:15:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-735960
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 12:27:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 12:25:13 +0000   Mon, 01 Jul 2024 12:15:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 12:25:13 +0000   Mon, 01 Jul 2024 12:15:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 12:25:13 +0000   Mon, 01 Jul 2024 12:15:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 12:25:13 +0000   Mon, 01 Jul 2024 12:16:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.16
	  Hostname:    ha-735960
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a500128d5645446baeea5654afbcb060
	  System UUID:                a500128d-5645-446b-aeea-5654afbcb060
	  Boot ID:                    a9ffe936-2356-415e-aa5e-ceedcf15ed72
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pjfcw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 coredns-7db6d8ff4d-nk4lf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 coredns-7db6d8ff4d-p4rtz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 etcd-ha-735960                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-7f6hm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-735960             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-735960    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-lphzn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-735960             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-735960                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  Starting                 2m17s                  kube-proxy       
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                    kubelet          Node ha-735960 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                    kubelet          Node ha-735960 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                    kubelet          Node ha-735960 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                    node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  NodeReady                11m                    kubelet          Node ha-735960 status is now: NodeReady
	  Normal  RegisteredNode           10m                    node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  RegisteredNode           9m15s                  node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  RegisteredNode           7m6s                   node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  NodeHasSufficientMemory  3m15s (x8 over 3m15s)  kubelet          Node ha-735960 status is now: NodeHasSufficientMemory
	  Normal  Starting                 3m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    3m15s (x8 over 3m15s)  kubelet          Node ha-735960 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m15s (x7 over 3m15s)  kubelet          Node ha-735960 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m28s                  node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  RegisteredNode           2m17s                  node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  RegisteredNode           41s                    node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	
	
	Name:               ha-735960-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-735960-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=ha-735960
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_01T12_17_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 12:16:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-735960-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 12:27:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 12:25:08 +0000   Mon, 01 Jul 2024 12:16:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 12:25:08 +0000   Mon, 01 Jul 2024 12:16:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 12:25:08 +0000   Mon, 01 Jul 2024 12:16:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 12:25:08 +0000   Mon, 01 Jul 2024 12:17:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.86
	  Hostname:    ha-735960-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 58cf4e4771994f2084a06f7d76199172
	  System UUID:                58cf4e47-7199-4f20-84a0-6f7d76199172
	  Boot ID:                    41c32de2-f03a-41e4-b332-91dc3dc2ccaf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-twnb4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 etcd-ha-735960-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-bztzv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-735960-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-735960-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-b6knb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-735960-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-735960-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m23s                  kube-proxy       
	  Normal   Starting                 7m19s                  kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-735960-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-735960-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-735960-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   RegisteredNode           9m15s                  node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   Starting                 7m24s                  kubelet          Starting kubelet.
	  Warning  Rebooted                 7m24s                  kubelet          Node ha-735960-m02 has been rebooted, boot id: 64290a4a-a20d-436b-8567-0d3e8b822776
	  Normal   NodeHasSufficientPID     7m24s                  kubelet          Node ha-735960-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  7m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  7m24s                  kubelet          Node ha-735960-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m24s                  kubelet          Node ha-735960-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           7m6s                   node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   Starting                 2m51s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m51s (x8 over 2m51s)  kubelet          Node ha-735960-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m51s (x8 over 2m51s)  kubelet          Node ha-735960-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m51s (x7 over 2m51s)  kubelet          Node ha-735960-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           2m28s                  node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   RegisteredNode           2m17s                  node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   RegisteredNode           41s                    node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	
	
	Name:               ha-735960-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-735960-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=ha-735960
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_01T12_18_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 12:18:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-735960-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 12:27:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 12:26:42 +0000   Mon, 01 Jul 2024 12:26:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 12:26:42 +0000   Mon, 01 Jul 2024 12:26:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 12:26:42 +0000   Mon, 01 Jul 2024 12:26:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 12:26:42 +0000   Mon, 01 Jul 2024 12:26:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    ha-735960-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 995d5c3b59f847378d8e94e940e73ad6
	  System UUID:                995d5c3b-59f8-4737-8d8e-94e940e73ad6
	  Boot ID:                    bc7ccd53-413f-4b49-a89c-18c93eb90ad9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cpsct                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 etcd-ha-735960-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m32s
	  kube-system                 kindnet-2424m                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m34s
	  kube-system                 kube-apiserver-ha-735960-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                 kube-controller-manager-ha-735960-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                 kube-proxy-776rt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	  kube-system                 kube-scheduler-ha-735960-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                 kube-vip-ha-735960-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 54s                    kube-proxy       
	  Normal   Starting                 9m29s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  9m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9m34s (x8 over 9m34s)  kubelet          Node ha-735960-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m34s (x8 over 9m34s)  kubelet          Node ha-735960-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m34s (x7 over 9m34s)  kubelet          Node ha-735960-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m31s                  node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   RegisteredNode           9m30s                  node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   RegisteredNode           9m15s                  node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   RegisteredNode           7m6s                   node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   RegisteredNode           2m28s                  node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   RegisteredNode           2m17s                  node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   NodeNotReady             108s                   node-controller  Node ha-735960-m03 status is now: NodeNotReady
	  Normal   Starting                 59s                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  59s                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  59s (x3 over 59s)      kubelet          Node ha-735960-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x3 over 59s)      kubelet          Node ha-735960-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x3 over 59s)      kubelet          Node ha-735960-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 59s (x2 over 59s)      kubelet          Node ha-735960-m03 has been rebooted, boot id: bc7ccd53-413f-4b49-a89c-18c93eb90ad9
	  Normal   NodeReady                59s (x2 over 59s)      kubelet          Node ha-735960-m03 status is now: NodeReady
	  Normal   RegisteredNode           41s                    node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	
	
	Name:               ha-735960-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-735960-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=ha-735960
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_01T12_19_10_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 12:19:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-735960-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 12:27:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 12:27:30 +0000   Mon, 01 Jul 2024 12:27:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 12:27:30 +0000   Mon, 01 Jul 2024 12:27:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 12:27:30 +0000   Mon, 01 Jul 2024 12:27:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 12:27:30 +0000   Mon, 01 Jul 2024 12:27:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    ha-735960-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd9ce62e425d4b9a9ba9ce7045362f6f
	  System UUID:                fd9ce62e-425d-4b9a-9ba9-ce7045362f6f
	  Boot ID:                    ac395c38-b578-4b7c-8c31-9939ff570d11
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6gx8s       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m32s
	  kube-system                 kube-proxy-25ssf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m25s                  kube-proxy       
	  Normal   Starting                 10s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  8m32s (x2 over 8m32s)  kubelet          Node ha-735960-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m32s (x2 over 8m32s)  kubelet          Node ha-735960-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m32s (x2 over 8m32s)  kubelet          Node ha-735960-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  8m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           8m31s                  node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   RegisteredNode           8m30s                  node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   RegisteredNode           8m30s                  node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   NodeReady                8m20s                  kubelet          Node ha-735960-m04 status is now: NodeReady
	  Normal   RegisteredNode           7m6s                   node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   RegisteredNode           2m28s                  node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   RegisteredNode           2m17s                  node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   NodeNotReady             108s                   node-controller  Node ha-735960-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           41s                    node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   Starting                 12s                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  12s                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11s (x2 over 11s)      kubelet          Node ha-735960-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11s (x2 over 11s)      kubelet          Node ha-735960-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11s (x2 over 11s)      kubelet          Node ha-735960-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 11s                    kubelet          Node ha-735960-m04 has been rebooted, boot id: ac395c38-b578-4b7c-8c31-9939ff570d11
	  Normal   NodeReady                11s                    kubelet          Node ha-735960-m04 status is now: NodeReady
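	
	The repeated "Starting kubelet." and "Rebooted" events on every node are consistent with the stop/start cycle exercised by RestartClusterKeepsNodes. To regenerate this view against the profile (a sketch, using minikube's bundled kubectl):
	
	  $ out/minikube-linux-amd64 kubectl -p ha-735960 -- describe nodes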
	
	
	==> dmesg <==
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050613] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036847] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.466422] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.742414] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.542503] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.890956] systemd-fstab-generator[479]: Ignoring "noauto" option for root device
	[  +0.054969] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050473] systemd-fstab-generator[491]: Ignoring "noauto" option for root device
	[  +2.186564] systemd-fstab-generator[1047]: Ignoring "noauto" option for root device
	[  +0.281745] systemd-fstab-generator[1084]: Ignoring "noauto" option for root device
	[  +0.110826] systemd-fstab-generator[1096]: Ignoring "noauto" option for root device
	[  +0.123894] systemd-fstab-generator[1110]: Ignoring "noauto" option for root device
	[  +2.248144] kauditd_printk_skb: 195 callbacks suppressed
	[  +0.296890] systemd-fstab-generator[1351]: Ignoring "noauto" option for root device
	[  +0.110572] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
	[  +0.111234] systemd-fstab-generator[1375]: Ignoring "noauto" option for root device
	[  +0.128120] systemd-fstab-generator[1390]: Ignoring "noauto" option for root device
	[  +0.483978] systemd-fstab-generator[1543]: Ignoring "noauto" option for root device
	[  +6.839985] kauditd_printk_skb: 176 callbacks suppressed
	[ +10.416982] kauditd_printk_skb: 40 callbacks suppressed
	[Jul 1 12:25] kauditd_printk_skb: 30 callbacks suppressed
	[ +36.086285] kauditd_printk_skb: 48 callbacks suppressed
	
	
	==> etcd [6a200a6b4902] <==
	{"level":"info","ts":"2024-07-01T12:23:54.888482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:54.888629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:54.888657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:54.888687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:54.88881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:56.288805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:56.288918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:56.288952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:56.289018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:56.289055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:57.688608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:57.688647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:57.688658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:57.688677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:57.688686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"warn","ts":"2024-07-01T12:23:57.772826Z","caller":"etcdserver/server.go:2089","msg":"failed to publish local member to cluster through raft","local-member-id":"b6c76b3131c1024","local-member-attributes":"{Name:ha-735960 ClientURLs:[https://192.168.39.16:2379]}","request-path":"/0/members/b6c76b3131c1024/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-07-01T12:23:59.088585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:59.088645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:59.08866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:59.088676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:59.088691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"warn","ts":"2024-07-01T12:23:59.821067Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c77bbbee62c21090","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:23:59.821149Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c77bbbee62c21090","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:23:59.836394Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-01T12:23:59.837603Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: no route to host"}
	
	
	==> etcd [852492f61fee] <==
	{"level":"warn","ts":"2024-07-01T12:26:26.327522Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.97:2380/version","remote-member-id":"77557cf66c24e9ff","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:26.327591Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"77557cf66c24e9ff","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:28.673762Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:28.673886Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:30.329643Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.97:2380/version","remote-member-id":"77557cf66c24e9ff","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:30.329708Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"77557cf66c24e9ff","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:33.674228Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:33.674291Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:34.331758Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.97:2380/version","remote-member-id":"77557cf66c24e9ff","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:34.331871Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"77557cf66c24e9ff","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:38.333902Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.97:2380/version","remote-member-id":"77557cf66c24e9ff","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:38.334199Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"77557cf66c24e9ff","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:38.674977Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:38.675107Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:42.336588Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.97:2380/version","remote-member-id":"77557cf66c24e9ff","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:42.336721Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"77557cf66c24e9ff","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:43.675872Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:26:43.675816Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-01T12:26:44.691256Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:26:44.707815Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b6c76b3131c1024","to":"77557cf66c24e9ff","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-01T12:26:44.707933Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b6c76b3131c1024","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:26:44.734098Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b6c76b3131c1024","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:26:44.734341Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b6c76b3131c1024","remote-peer-id":"77557cf66c24e9ff"}
	{"level":"info","ts":"2024-07-01T12:26:44.734943Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b6c76b3131c1024","to":"77557cf66c24e9ff","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-01T12:26:44.734997Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b6c76b3131c1024","remote-peer-id":"77557cf66c24e9ff"}
	
	
	==> kernel <==
	 12:27:42 up 3 min,  0 users,  load average: 0.11, 0.16, 0.08
	Linux ha-735960 5.10.207 #1 SMP Wed Jun 26 19:37:34 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bf788c37e091] <==
	I0701 12:27:06.456938       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:27:16.469806       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:27:16.469876       1 main.go:227] handling current node
	I0701 12:27:16.469887       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:27:16.469892       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:27:16.470093       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:27:16.470154       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:27:16.470277       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:27:16.470296       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:27:26.489056       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:27:26.489096       1 main.go:227] handling current node
	I0701 12:27:26.489107       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:27:26.489112       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:27:26.489365       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:27:26.489389       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:27:26.489445       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:27:26.489502       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:27:36.502509       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:27:36.502721       1 main.go:227] handling current node
	I0701 12:27:36.502867       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:27:36.502957       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:27:36.503231       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:27:36.503293       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:27:36.503421       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:27:36.503550       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
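	
	Each kindnet poll above walks all four nodes and programs a route for every remote pod CIDR (10.244.1.0/24 through 10.244.3.0/24) toward the owning node's IP. A quick way to confirm the routes landed (a sketch):
	
	  $ out/minikube-linux-amd64 ssh -p ha-735960 -- ip route | grep 10.244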
	
	
	==> kindnet [f472aef5302f] <==
	I0701 12:20:12.428842       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:20:22.443154       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:20:22.443292       1 main.go:227] handling current node
	I0701 12:20:22.443323       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:20:22.443388       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:20:22.443605       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:20:22.443653       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:20:22.443793       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:20:22.443836       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:20:32.451395       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:20:32.451431       1 main.go:227] handling current node
	I0701 12:20:32.451481       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:20:32.451486       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:20:32.451947       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:20:32.451980       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:20:32.452873       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:20:32.453015       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:20:42.470169       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:20:42.470264       1 main.go:227] handling current node
	I0701 12:20:42.470289       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:20:42.470302       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:20:42.470523       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:20:42.470616       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:20:42.470868       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:20:42.470914       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8ee3e44a43c3] <==
	I0701 12:25:11.632913       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0701 12:25:11.645811       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0701 12:25:11.645876       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0701 12:25:11.690103       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0701 12:25:11.690292       1 policy_source.go:224] refreshing policies
	I0701 12:25:11.718179       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0701 12:25:11.726917       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0701 12:25:11.729879       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0701 12:25:11.730212       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0701 12:25:11.730238       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0701 12:25:11.737552       1 shared_informer.go:320] Caches are synced for configmaps
	I0701 12:25:11.751625       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0701 12:25:11.752269       1 aggregator.go:165] initial CRD sync complete...
	I0701 12:25:11.752312       1 autoregister_controller.go:141] Starting autoregister controller
	I0701 12:25:11.752319       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0701 12:25:11.752325       1 cache.go:39] Caches are synced for autoregister controller
	I0701 12:25:11.756015       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0701 12:25:11.757180       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0701 12:25:11.779526       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0701 12:25:11.807352       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.86]
	I0701 12:25:11.811699       1 controller.go:615] quota admission added evaluator for: endpoints
	I0701 12:25:11.839496       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0701 12:25:11.843047       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0701 12:25:12.631101       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0701 12:25:13.074615       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.16 192.168.39.86]
	
	
	==> kube-apiserver [a3cb59ee8d57] <==
	I0701 12:24:33.660467       1 options.go:221] external host was not specified, using 192.168.39.16
	I0701 12:24:33.670142       1 server.go:148] Version: v1.30.2
	I0701 12:24:33.670491       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:24:34.296638       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0701 12:24:34.308879       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0701 12:24:34.324179       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0701 12:24:34.324219       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0701 12:24:34.326894       1 instance.go:299] Using reconciler: lease
	W0701 12:24:54.288105       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0701 12:24:54.289911       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0701 12:24:54.328399       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
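	
	This earlier apiserver instance exited fatally because its connections to etcd at 127.0.0.1:2379 never completed within the deadline, matching the leaderless etcd member above; the replacement instance (8ee3e44a43c3) only came up once etcd regained quorum. A quick liveness probe against the apiserver (a sketch):
	
	  $ out/minikube-linux-amd64 ssh -p ha-735960 -- curl -sk https://localhost:8443/healthz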
	
	
	==> kube-controller-manager [67dc946c8c45] <==
	I0701 12:25:24.689462       1 shared_informer.go:320] Caches are synced for deployment
	I0701 12:25:24.698997       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0701 12:25:24.699584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="194.691µs"
	I0701 12:25:24.699894       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="568.701µs"
	I0701 12:25:24.704343       1 shared_informer.go:320] Caches are synced for resource quota
	I0701 12:25:24.710493       1 shared_informer.go:320] Caches are synced for stateful set
	I0701 12:25:24.741914       1 shared_informer.go:320] Caches are synced for resource quota
	I0701 12:25:24.771129       1 shared_informer.go:320] Caches are synced for disruption
	I0701 12:25:24.825005       1 shared_informer.go:320] Caches are synced for persistent volume
	I0701 12:25:25.061636       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.968119ms"
	I0701 12:25:25.061928       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.671µs"
	I0701 12:25:25.231337       1 shared_informer.go:320] Caches are synced for garbage collector
	I0701 12:25:25.278015       1 shared_informer.go:320] Caches are synced for garbage collector
	I0701 12:25:25.278079       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0701 12:25:53.073870       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-735960-m04"
	I0701 12:25:53.162214       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.543735ms"
	I0701 12:25:53.163381       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="162.337µs"
	I0701 12:25:59.557437       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.6658ms"
	I0701 12:25:59.558362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="80.196µs"
	I0701 12:25:59.565576       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-s49dr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-s49dr\": the object has been modified; please apply your changes to the latest version and try again"
	I0701 12:25:59.566070       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"673ce502-ab01-47a0-ad3e-c33bd402b496", APIVersion:"v1", ResourceVersion:"234", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-s49dr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-s49dr": the object has been modified; please apply your changes to the latest version and try again
	I0701 12:26:43.750974       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="174.579µs"
	I0701 12:26:47.044231       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.968469ms"
	I0701 12:26:47.047107       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.336µs"
	I0701 12:27:30.083176       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-735960-m04"
	
	
	==> kube-controller-manager [ec2c061093f1] <==
	I0701 12:24:33.938262       1 serving.go:380] Generated self-signed cert in-memory
	I0701 12:24:34.667463       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0701 12:24:34.667501       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:24:34.670076       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0701 12:24:34.670322       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0701 12:24:34.670888       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0701 12:24:34.671075       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0701 12:24:55.336106       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.16:8443/healthz\": dial tcp 192.168.39.16:8443: connect: connection refused"
	
	
	==> kube-proxy [6116abe6039d] <==
	I0701 12:16:09.205590       1 server_linux.go:69] "Using iptables proxy"
	I0701 12:16:09.223098       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.16"]
	I0701 12:16:09.284088       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0701 12:16:09.284134       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0701 12:16:09.284152       1 server_linux.go:165] "Using iptables Proxier"
	I0701 12:16:09.286802       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0701 12:16:09.287240       1 server.go:872] "Version info" version="v1.30.2"
	I0701 12:16:09.287274       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:16:09.288803       1 config.go:192] "Starting service config controller"
	I0701 12:16:09.288830       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0701 12:16:09.289262       1 config.go:101] "Starting endpoint slice config controller"
	I0701 12:16:09.289283       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0701 12:16:09.290101       1 config.go:319] "Starting node config controller"
	I0701 12:16:09.290125       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0701 12:16:09.389941       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0701 12:16:09.390030       1 shared_informer.go:320] Caches are synced for service config
	I0701 12:16:09.390393       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [710f5c3a9f85] <==
	I0701 12:25:23.858069       1 server_linux.go:69] "Using iptables proxy"
	I0701 12:25:23.875125       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.16"]
	I0701 12:25:23.958416       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0701 12:25:23.958505       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0701 12:25:23.958526       1 server_linux.go:165] "Using iptables Proxier"
	I0701 12:25:23.963079       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0701 12:25:23.963683       1 server.go:872] "Version info" version="v1.30.2"
	I0701 12:25:23.963707       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:25:23.967807       1 config.go:192] "Starting service config controller"
	I0701 12:25:23.968544       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0701 12:25:23.968625       1 config.go:101] "Starting endpoint slice config controller"
	I0701 12:25:23.968632       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0701 12:25:23.972994       1 config.go:319] "Starting node config controller"
	I0701 12:25:23.973007       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0701 12:25:24.069380       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0701 12:25:24.069565       1 shared_informer.go:320] Caches are synced for service config
	I0701 12:25:24.073577       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2d71437c5f06] <==
	Trace[1766396451]: [10.001227292s] [10.001227292s] END
	E0701 12:23:38.923742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	W0701 12:23:40.712171       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.16:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:23:40.712228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.16:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:23:40.847258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.16:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35008->192.168.39.16:8443: read: connection reset by peer
	I0701 12:23:40.847402       1 trace.go:236] Trace[2065780204]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Jul-2024 12:23:30.463) (total time: 10384ms):
	Trace[2065780204]: ---"Objects listed" error:Get "https://192.168.39.16:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35008->192.168.39.16:8443: read: connection reset by peer 10384ms (12:23:40.847)
	Trace[2065780204]: [10.384136255s] [10.384136255s] END
	E0701 12:23:40.847432       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.16:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35008->192.168.39.16:8443: read: connection reset by peer
	W0701 12:23:40.847437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35050->192.168.39.16:8443: read: connection reset by peer
	W0701 12:23:40.847259       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.16:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35028->192.168.39.16:8443: read: connection reset by peer
	E0701 12:23:40.847495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35050->192.168.39.16:8443: read: connection reset by peer
	E0701 12:23:40.847499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.16:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35028->192.168.39.16:8443: read: connection reset by peer
	W0701 12:23:40.847682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.16:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35066->192.168.39.16:8443: read: connection reset by peer
	E0701 12:23:40.847714       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.16:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35066->192.168.39.16:8443: read: connection reset by peer
	W0701 12:23:40.848299       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.16:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35034->192.168.39.16:8443: read: connection reset by peer
	E0701 12:23:40.848357       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.16:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35034->192.168.39.16:8443: read: connection reset by peer
	W0701 12:23:51.660283       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:23:51.660337       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:23:54.252191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:23:54.252565       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:23:55.679907       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:23:55.680228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:24:00.290141       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0701 12:24:00.290379       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [693eb0b8f5d7] <==
	W0701 12:25:03.325651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.16:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:25:03.325717       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.16:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:25:03.469418       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.16:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:25:03.469554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.16:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:25:03.474242       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.16:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:25:03.474348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.16:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:25:03.575486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:25:03.575608       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:25:03.691679       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.16:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:25:03.691809       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.16:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:25:05.461372       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.16:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:25:05.461485       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.16:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:25:05.563752       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.16:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:25:05.563793       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.16:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:25:05.636901       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:25:05.637119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:25:11.653758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0701 12:25:11.654470       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0701 12:25:11.654763       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0701 12:25:11.655634       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0701 12:25:11.655894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0701 12:25:11.655933       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0701 12:25:11.659133       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0701 12:25:11.659348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0701 12:25:13.850760       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 01 12:25:13 ha-735960 kubelet[1550]: I0701 12:25:13.105581    1550 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 01 12:25:13 ha-735960 kubelet[1550]: I0701 12:25:13.106791    1550 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 01 12:25:23 ha-735960 kubelet[1550]: I0701 12:25:23.225133    1550 scope.go:117] "RemoveContainer" containerID="769b0b8751350714b3d616a4cb2d06e20a1b7a96e8ac3e8f21b0d653f581e5f0"
	Jul 01 12:25:23 ha-735960 kubelet[1550]: I0701 12:25:23.225251    1550 scope.go:117] "RemoveContainer" containerID="a9c30cd4b3455401ac572f5a7fb2b84cb27956207b4804f80b909a2ccb4c394f"
	Jul 01 12:25:23 ha-735960 kubelet[1550]: I0701 12:25:23.226499    1550 scope.go:117] "RemoveContainer" containerID="6116abe6039dc6c324dce464fa4d85597bcc3455523d4a06be4293c343a9f8f9"
	Jul 01 12:25:24 ha-735960 kubelet[1550]: I0701 12:25:24.225255    1550 scope.go:117] "RemoveContainer" containerID="1ef6d9da6a9c5d6e77ef8d42735bdba288502d231394d299243bc1b669822d1c"
	Jul 01 12:25:25 ha-735960 kubelet[1550]: I0701 12:25:25.225212    1550 scope.go:117] "RemoveContainer" containerID="f472aef5302fd01233da1bd769162654c0b238cb1a3b0c9b24deef221c4821a3"
	Jul 01 12:25:26 ha-735960 kubelet[1550]: I0701 12:25:26.229286    1550 scope.go:117] "RemoveContainer" containerID="97d58c94f3fdcc84b84c3c46e6b25f8e7da118d5c9cd53058ae127fe580a40a7"
	Jul 01 12:25:26 ha-735960 kubelet[1550]: E0701 12:25:26.319340    1550 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 01 12:25:26 ha-735960 kubelet[1550]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 01 12:25:26 ha-735960 kubelet[1550]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 01 12:25:26 ha-735960 kubelet[1550]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 01 12:25:26 ha-735960 kubelet[1550]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 01 12:25:26 ha-735960 kubelet[1550]: I0701 12:25:26.443283    1550 scope.go:117] "RemoveContainer" containerID="14112a4d8f2cb5cfea8813c52de120eeef6fe681ebf589fd8708d1557c35b85f"
	Jul 01 12:25:26 ha-735960 kubelet[1550]: I0701 12:25:26.480472    1550 scope.go:117] "RemoveContainer" containerID="97d58c94f3fdcc84b84c3c46e6b25f8e7da118d5c9cd53058ae127fe580a40a7"
	Jul 01 12:26:26 ha-735960 kubelet[1550]: E0701 12:26:26.244909    1550 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 01 12:26:26 ha-735960 kubelet[1550]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 01 12:26:26 ha-735960 kubelet[1550]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 01 12:26:26 ha-735960 kubelet[1550]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 01 12:26:26 ha-735960 kubelet[1550]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 01 12:27:26 ha-735960 kubelet[1550]: E0701 12:27:26.245316    1550 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 01 12:27:26 ha-735960 kubelet[1550]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 01 12:27:26 ha-735960 kubelet[1550]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 01 12:27:26 ha-735960 kubelet[1550]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 01 12:27:26 ha-735960 kubelet[1550]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-735960 -n ha-735960
helpers_test.go:261: (dbg) Run:  kubectl --context ha-735960 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
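For reference, the non-Running-pod query just above can be reproduced programmatically. The following is a minimal client-go sketch of the same field-selector list; it is not code from helpers_test.go, and the kubeconfig path in it is an illustrative assumption.

// list_not_running.go: a minimal sketch of the same query the harness runs
// above with kubectl (--field-selector=status.phase!=Running across all
// namespaces). Not part of helpers_test.go; the kubeconfig path below is an
// illustrative assumption.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Point this at a kubeconfig whose current context is the cluster under
	// test (the harness instead passes --context ha-735960 to kubectl).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// An empty namespace ("") lists across all namespaces, like kubectl -A.
	pods, err := cs.CoreV1().Pods("").List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Namespace, p.Name, p.Status.Phase)
	}
}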
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.31s)
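Triage note on the failure above: the kube-apiserver container exits fatally with "Error creating leases: error creating storage factory: context deadline exceeded" immediately after two gRPC dial failures against etcd at 127.0.0.1:2379, so the crash is downstream of etcd being unreachable (or refusing the TLS handshake) while the VM restarts. A minimal first probe, standard library only, is to check whether anything is accepting connections on that client port; the address below is taken from the log, everything else is illustrative:

// etcd_reachable.go: a minimal triage sketch, not part of the test suite.
// It only checks that the etcd client port seen in the apiserver log above
// (127.0.0.1:2379) accepts TCP connections; it does not speak the etcd
// protocol or validate TLS, which is the layer the handshake failed at.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 5*time.Second)
	if err != nil {
		fmt.Println("etcd client port unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("etcd client port is accepting connections")
}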

TestMultiControlPlane/serial/AddSecondaryNode (84.6s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-735960 --control-plane -v=7 --alsologtostderr
E0701 12:28:22.863621  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-735960 --control-plane -v=7 --alsologtostderr: (1m20.441899226s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr: (1.107061465s)
ha_test.go:616: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr": ha-735960
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-735960-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-735960-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-735960-m04
type: Worker
host: Running
kubelet: Running

ha-735960-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha_test.go:619: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr": ha-735960
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-735960-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-735960-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-735960-m04
type: Worker
host: Running
kubelet: Running

ha-735960-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha_test.go:622: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr": ha-735960
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-735960-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-735960-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-735960-m04
type: Worker
host: Running
kubelet: Running

ha-735960-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha_test.go:625: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr": ha-735960
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-735960-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-735960-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-735960-m04
type: Worker
host: Running
kubelet: Running

ha-735960-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

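Each of the four assertions above (ha_test.go:616, 619, 622 and 625) fails against the same status text, which now lists a fifth node, ha-735960-m05, after the control-plane add, presumably because the expected counts in the test no longer match. As a purely hypothetical sketch of this kind of check, not the actual ha_test.go implementation, the snippet below counts "type: Control Plane" stanzas in status output shaped like the text above:

// count_control_planes.go: a hypothetical sketch of counting control-plane
// stanzas in "minikube status" text shaped like the output above. This is
// not the real ha_test.go logic.
package main

import (
	"fmt"
	"strings"
)

func countControlPlanes(status string) int {
	n := 0
	for _, line := range strings.Split(status, "\n") {
		if strings.TrimSpace(line) == "type: Control Plane" {
			n++
		}
	}
	return n
}

func main() {
	// A trimmed sample in the shape of the status output above.
	sample := `ha-735960
type: Control Plane
host: Running

ha-735960-m04
type: Worker
host: Running`
	fmt.Println(countControlPlanes(sample)) // prints 1
}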
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-735960 -n ha-735960
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-735960 logs -n 25: (1.780140035s)
helpers_test.go:252: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m04 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m03_ha-735960-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-735960 cp testdata/cp-test.txt                                                | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2826819896/001/cp-test_ha-735960-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960:/home/docker/cp-test_ha-735960-m04_ha-735960.txt                       |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960 sudo cat                                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m04_ha-735960.txt                                 |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m02:/home/docker/cp-test_ha-735960-m04_ha-735960-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m02 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m04_ha-735960-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m03:/home/docker/cp-test_ha-735960-m04_ha-735960-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m03 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m04_ha-735960-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-735960 node stop m02 -v=7                                                     | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-735960 node start m02 -v=7                                                    | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:20 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-735960 -v=7                                                           | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-735960 -v=7                                                                | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:20 UTC | 01 Jul 24 12:21 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-735960 --wait=true -v=7                                                    | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-735960                                                                | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:22 UTC |                     |
	| node    | ha-735960 node delete m03 -v=7                                                   | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-735960 stop -v=7                                                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:23 UTC | 01 Jul 24 12:24 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-735960 --wait=true                                                         | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:24 UTC | 01 Jul 24 12:27 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	| node    | add -p ha-735960                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:27 UTC | 01 Jul 24 12:29 UTC |
	|         | --control-plane -v=7                                                             |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/01 12:24:02
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 12:24:02.565321  653531 out.go:291] Setting OutFile to fd 1 ...
	I0701 12:24:02.565576  653531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:24:02.565584  653531 out.go:304] Setting ErrFile to fd 2...
	I0701 12:24:02.565588  653531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:24:02.565782  653531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
	I0701 12:24:02.566304  653531 out.go:298] Setting JSON to false
	I0701 12:24:02.567248  653531 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7581,"bootTime":1719829062,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 12:24:02.567318  653531 start.go:139] virtualization: kvm guest
	I0701 12:24:02.569903  653531 out.go:177] * [ha-735960] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0701 12:24:02.571307  653531 notify.go:220] Checking for updates...
	I0701 12:24:02.571336  653531 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 12:24:02.572748  653531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:24:02.574111  653531 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:24:02.575333  653531 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-630650/.minikube
	I0701 12:24:02.576670  653531 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 12:24:02.578040  653531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:24:02.579691  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:02.580063  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:02.580118  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:02.595084  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46077
	I0701 12:24:02.595523  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:02.596065  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:02.596090  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:02.596376  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:02.596591  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:02.596798  653531 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 12:24:02.597091  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:02.597140  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:02.611685  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43891
	I0701 12:24:02.612062  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:02.612574  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:02.612596  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:02.612886  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:02.613060  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:02.647232  653531 out.go:177] * Using the kvm2 driver based on existing profile
	I0701 12:24:02.648606  653531 start.go:297] selected driver: kvm2
	I0701 12:24:02.648624  653531 start.go:901] validating driver "kvm2" against &{Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:24:02.648774  653531 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:24:02.649109  653531 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:24:02.649176  653531 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19166-630650/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0701 12:24:02.663726  653531 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0701 12:24:02.664362  653531 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:24:02.664394  653531 cni.go:84] Creating CNI manager for ""
	I0701 12:24:02.664400  653531 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0701 12:24:02.664456  653531 start.go:340] cluster config:
	{Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:24:02.664569  653531 iso.go:125] acquiring lock: {Name:mk5c70910f61bc270c83609c48670eaf9d7e0602 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:24:02.666644  653531 out.go:177] * Starting "ha-735960" primary control-plane node in "ha-735960" cluster
	I0701 12:24:02.667913  653531 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:24:02.667956  653531 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0701 12:24:02.667963  653531 cache.go:56] Caching tarball of preloaded images
	I0701 12:24:02.668051  653531 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:24:02.668065  653531 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:24:02.668178  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:24:02.668362  653531 start.go:360] acquireMachinesLock for ha-735960: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:24:02.668420  653531 start.go:364] duration metric: took 37.459µs to acquireMachinesLock for "ha-735960"
	I0701 12:24:02.668440  653531 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:24:02.668454  653531 fix.go:54] fixHost starting: 
	I0701 12:24:02.668711  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:02.668747  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:02.682861  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39713
	I0701 12:24:02.683321  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:02.683791  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:02.683812  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:02.684145  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:02.684389  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:02.684573  653531 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:24:02.686019  653531 fix.go:112] recreateIfNeeded on ha-735960: state=Stopped err=<nil>
	I0701 12:24:02.686043  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	W0701 12:24:02.686187  653531 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:24:02.688339  653531 out.go:177] * Restarting existing kvm2 VM for "ha-735960" ...
	I0701 12:24:02.690004  653531 main.go:141] libmachine: (ha-735960) Calling .Start
	I0701 12:24:02.690210  653531 main.go:141] libmachine: (ha-735960) Ensuring networks are active...
	I0701 12:24:02.690928  653531 main.go:141] libmachine: (ha-735960) Ensuring network default is active
	I0701 12:24:02.691237  653531 main.go:141] libmachine: (ha-735960) Ensuring network mk-ha-735960 is active
	I0701 12:24:02.691618  653531 main.go:141] libmachine: (ha-735960) Getting domain xml...
	I0701 12:24:02.692321  653531 main.go:141] libmachine: (ha-735960) Creating domain...
	I0701 12:24:03.888996  653531 main.go:141] libmachine: (ha-735960) Waiting to get IP...
	I0701 12:24:03.889967  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:03.890480  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:03.890588  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:03.890454  653582 retry.go:31] will retry after 276.532377ms: waiting for machine to come up
	I0701 12:24:04.169193  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:04.169696  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:04.169722  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:04.169655  653582 retry.go:31] will retry after 379.701447ms: waiting for machine to come up
	I0701 12:24:04.551325  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:04.551741  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:04.551768  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:04.551690  653582 retry.go:31] will retry after 390.796114ms: waiting for machine to come up
	I0701 12:24:04.944503  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:04.944879  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:04.944907  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:04.944824  653582 retry.go:31] will retry after 501.242083ms: waiting for machine to come up
	I0701 12:24:05.447754  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:05.448283  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:05.448315  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:05.448261  653582 retry.go:31] will retry after 739.761709ms: waiting for machine to come up
	I0701 12:24:06.189145  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:06.189602  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:06.189631  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:06.189545  653582 retry.go:31] will retry after 652.97975ms: waiting for machine to come up
	I0701 12:24:06.844427  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:06.844894  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:06.844917  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:06.844845  653582 retry.go:31] will retry after 1.122975762s: waiting for machine to come up
	I0701 12:24:07.969893  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:07.970374  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:07.970427  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:07.970304  653582 retry.go:31] will retry after 933.604302ms: waiting for machine to come up
	I0701 12:24:08.905636  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:08.905959  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:08.905983  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:08.905909  653582 retry.go:31] will retry after 1.753153445s: waiting for machine to come up
	I0701 12:24:10.662098  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:10.662553  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:10.662622  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:10.662537  653582 retry.go:31] will retry after 1.625060377s: waiting for machine to come up
	I0701 12:24:12.290368  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:12.290788  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:12.290822  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:12.290695  653582 retry.go:31] will retry after 2.741972388s: waiting for machine to come up
	I0701 12:24:15.036161  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:15.036634  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:15.036661  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:15.036581  653582 retry.go:31] will retry after 3.113034425s: waiting for machine to come up
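The "will retry after ..." lines above are minikube's retry.go loop: each failed probe of the libvirt DHCP lease schedules another attempt after a growing, jittered delay. Below is a minimal Go sketch of that pattern; the helper names (getDomainIP, waitForIP) and the growth factor are illustrative assumptions, not minikube's actual API.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// getDomainIP is a hypothetical probe; the real code asks libvirt for the
// domain's current DHCP lease.
func getDomainIP() (string, error) {
	return "", errNoIP // pretend the lease has not appeared yet
}

func waitForIP(maxAttempts int) (string, error) {
	delay := 250 * time.Millisecond
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		ip, err := getDomainIP()
		if err == nil {
			return ip, nil
		}
		// Jitter the delay and grow it, producing the irregular intervals
		// (276ms, 379ms, 390ms, ...) seen in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("gave up after %d attempts: %w", maxAttempts, errNoIP)
}

func main() {
	if _, err := waitForIP(5); err != nil {
		fmt.Println(err)
	}
}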
	I0701 12:24:18.151534  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.152048  653531 main.go:141] libmachine: (ha-735960) Found IP for machine: 192.168.39.16
	I0701 12:24:18.152074  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has current primary IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.152083  653531 main.go:141] libmachine: (ha-735960) Reserving static IP address...
	I0701 12:24:18.152579  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "ha-735960", mac: "52:54:00:6c:20:7c", ip: "192.168.39.16"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.152611  653531 main.go:141] libmachine: (ha-735960) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960", mac: "52:54:00:6c:20:7c", ip: "192.168.39.16"}
	I0701 12:24:18.152626  653531 main.go:141] libmachine: (ha-735960) Reserved static IP address: 192.168.39.16
	I0701 12:24:18.152643  653531 main.go:141] libmachine: (ha-735960) Waiting for SSH to be available...
	I0701 12:24:18.152674  653531 main.go:141] libmachine: (ha-735960) DBG | Getting to WaitForSSH function...
	I0701 12:24:18.154511  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.154741  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.154760  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.154885  653531 main.go:141] libmachine: (ha-735960) DBG | Using SSH client type: external
	I0701 12:24:18.154912  653531 main.go:141] libmachine: (ha-735960) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa (-rw-------)
	I0701 12:24:18.154954  653531 main.go:141] libmachine: (ha-735960) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:24:18.154968  653531 main.go:141] libmachine: (ha-735960) DBG | About to run SSH command:
	I0701 12:24:18.154991  653531 main.go:141] libmachine: (ha-735960) DBG | exit 0
	I0701 12:24:18.274220  653531 main.go:141] libmachine: (ha-735960) DBG | SSH cmd err, output: <nil>: 
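The DBG lines above show the provisioner shelling out to an external /usr/bin/ssh with a fixed option list and running exit 0 as a reachability probe. A rough Go equivalent, reusing the host, key path, and flags from the log (a sketch, not minikube's sshutil implementation):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Option list copied from the DBG line above; `exit 0` is the probe command.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa",
		"-p", "22",
		"docker@192.168.39.16",
		"exit 0",
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}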
	I0701 12:24:18.274677  653531 main.go:141] libmachine: (ha-735960) Calling .GetConfigRaw
	I0701 12:24:18.275344  653531 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:24:18.277628  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.278085  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.278118  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.278447  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:24:18.278671  653531 machine.go:94] provisionDockerMachine start ...
	I0701 12:24:18.278694  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:18.278956  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:18.281138  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.281565  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.281590  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.281697  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:18.281884  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.282084  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.282290  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:18.282484  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:18.282777  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:18.282790  653531 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:24:18.378249  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:24:18.378279  653531 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:24:18.378583  653531 buildroot.go:166] provisioning hostname "ha-735960"
	I0701 12:24:18.378614  653531 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:24:18.378869  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:18.381421  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.381789  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.381817  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.381949  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:18.382158  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.382297  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.382445  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:18.382576  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:18.382763  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:18.382780  653531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960 && echo "ha-735960" | sudo tee /etc/hostname
	I0701 12:24:18.491369  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960
	
	I0701 12:24:18.491396  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:18.494039  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.494432  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.494460  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.494718  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:18.494939  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.495106  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.495259  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:18.495452  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:18.495675  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:18.495699  653531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:24:18.598595  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 12:24:18.598631  653531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:24:18.598653  653531 buildroot.go:174] setting up certificates
	I0701 12:24:18.598662  653531 provision.go:84] configureAuth start
	I0701 12:24:18.598670  653531 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:24:18.598968  653531 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:24:18.601563  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.602005  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.602036  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.602215  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:18.604739  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.605246  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.605273  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.605427  653531 provision.go:143] copyHostCerts
	I0701 12:24:18.605458  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:24:18.605515  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:24:18.605523  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:24:18.605588  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:24:18.605671  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:24:18.605688  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:24:18.605695  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:24:18.605718  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:24:18.605772  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:24:18.605788  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:24:18.605794  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:24:18.605814  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:24:18.605871  653531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960 san=[127.0.0.1 192.168.39.16 ha-735960 localhost minikube]
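provision.go:117 above reports issuing a server certificate from the local CA with the SAN list shown (IPs 127.0.0.1 and 192.168.39.16; names ha-735960, localhost, minikube). A minimal crypto/x509 sketch of that technique follows, with error handling elided for brevity; it illustrates the mechanism, not minikube's exact code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key pair and self-signed CA certificate.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the log line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-735960"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.16")},
		DNSNames:     []string{"ha-735960", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}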
	I0701 12:24:19.079576  653531 provision.go:177] copyRemoteCerts
	I0701 12:24:19.079661  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:24:19.079696  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:19.082253  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.082610  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:19.082638  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.082786  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:19.082996  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.083179  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:19.083325  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:19.160543  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:24:19.160634  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:24:19.183871  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:24:19.183957  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0701 12:24:19.206811  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:24:19.206911  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 12:24:19.229160  653531 provision.go:87] duration metric: took 630.48062ms to configureAuth
	I0701 12:24:19.229197  653531 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:24:19.229480  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:19.229521  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:19.229827  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:19.232595  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.233032  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:19.233062  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.233264  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:19.233514  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.233696  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.233834  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:19.234025  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:19.234222  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:19.234237  653531 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:24:19.331417  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:24:19.331446  653531 buildroot.go:70] root file system type: tmpfs
	I0701 12:24:19.331582  653531 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:24:19.331605  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:19.334269  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.334634  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:19.334660  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.334900  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:19.335107  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.335308  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.335479  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:19.335645  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:19.335809  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:19.335865  653531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:24:19.443562  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:24:19.443592  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:19.446176  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.446524  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:19.446556  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.446723  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:19.446930  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.447105  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.447245  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:19.447408  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:19.447591  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:19.447611  653531 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:24:21.232310  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 12:24:21.232343  653531 machine.go:97] duration metric: took 2.953656212s to provisionDockerMachine
	I0701 12:24:21.232359  653531 start.go:293] postStartSetup for "ha-735960" (driver="kvm2")
	I0701 12:24:21.232371  653531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:24:21.232390  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.232744  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:24:21.232777  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:21.235119  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.235559  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.235584  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.235772  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:21.235940  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.236122  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:21.236248  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:21.313134  653531 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:24:21.317084  653531 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:24:21.317118  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:24:21.317202  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:24:21.317295  653531 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:24:21.317307  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:24:21.317399  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:24:21.326681  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:24:21.349306  653531 start.go:296] duration metric: took 116.926386ms for postStartSetup
	I0701 12:24:21.349360  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.349703  653531 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:24:21.349739  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:21.352499  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.352917  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.352946  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.353148  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:21.353394  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.353561  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:21.353790  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:21.433784  653531 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:24:21.433859  653531 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:24:21.475659  653531 fix.go:56] duration metric: took 18.807194904s for fixHost
	I0701 12:24:21.475706  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:21.478623  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.479038  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.479071  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.479250  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:21.479467  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.479584  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.479702  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:21.479838  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:21.480034  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:21.480048  653531 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0701 12:24:21.586741  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836661.563256683
	
	I0701 12:24:21.586770  653531 fix.go:216] guest clock: 1719836661.563256683
	I0701 12:24:21.586783  653531 fix.go:229] Guest: 2024-07-01 12:24:21.563256683 +0000 UTC Remote: 2024-07-01 12:24:21.475685785 +0000 UTC m=+18.945537438 (delta=87.570898ms)
	I0701 12:24:21.586836  653531 fix.go:200] guest clock delta is within tolerance: 87.570898ms
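fix.go:216-229 above parses the guest's seconds.nanoseconds clock reading (from the date command run earlier), compares it with the host clock, and accepts the drift when it falls under a tolerance. A small sketch of that parse-and-compare step, assuming a one-second tolerance purely for illustration:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "seconds.nanoseconds" into a time.Time.
// `date +%N` zero-pads to nine digits, so the fraction is nanoseconds.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1719836661.563256683") // value from the log
	if err != nil {
		panic(err)
	}
	host := guest.Add(-87 * time.Millisecond) // stand-in host reading
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance for this sketch
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}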
	I0701 12:24:21.586844  653531 start.go:83] releasing machines lock for "ha-735960", held for 18.918411663s
	I0701 12:24:21.586868  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.587158  653531 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:24:21.589666  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.590034  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.590064  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.590216  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.590761  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.590954  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.591048  653531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:24:21.591096  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:21.591207  653531 ssh_runner.go:195] Run: cat /version.json
	I0701 12:24:21.591235  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:21.593711  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.593857  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.594066  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.594091  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.594278  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:21.594408  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.594432  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.594491  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.594596  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:21.594674  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:21.594780  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.594865  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:21.594903  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:21.595018  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:21.688196  653531 ssh_runner.go:195] Run: systemctl --version
	I0701 12:24:21.693743  653531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0701 12:24:21.698823  653531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:24:21.698901  653531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:24:21.714364  653531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:24:21.714404  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:24:21.714572  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:24:21.734692  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:24:21.744599  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:24:21.754591  653531 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:24:21.754664  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:24:21.764718  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:24:21.774564  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:24:21.784516  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:24:21.794592  653531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:24:21.804646  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:24:21.814497  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:24:21.824363  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:24:21.834566  653531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:24:21.843852  653531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:24:21.852939  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:21.959107  653531 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 12:24:21.981473  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:24:21.981556  653531 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:24:21.995383  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:24:22.009843  653531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:24:22.030755  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:24:22.043208  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:24:22.055774  653531 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:24:22.080888  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:24:22.093331  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:24:22.110088  653531 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:24:22.113487  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:24:22.121907  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:24:22.137227  653531 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:24:22.245438  653531 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:24:22.351994  653531 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:24:22.352150  653531 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 12:24:22.368109  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:22.474388  653531 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:24:24.887396  653531 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.412956412s)
	I0701 12:24:24.887487  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 12:24:24.900113  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:24:24.912702  653531 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:24:25.020545  653531 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:24:25.134056  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:25.242294  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:24:25.258251  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:24:25.270762  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:25.375199  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0701 12:24:25.454939  653531 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:24:25.455020  653531 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0701 12:24:25.460209  653531 start.go:562] Will wait 60s for crictl version
	I0701 12:24:25.460266  653531 ssh_runner.go:195] Run: which crictl
	I0701 12:24:25.463721  653531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:24:25.498358  653531 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.1
	RuntimeApiVersion:  v1
	I0701 12:24:25.498453  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:24:25.525766  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:24:25.549708  653531 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
	I0701 12:24:25.549757  653531 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:24:25.552699  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:25.553097  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:25.553132  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:25.553374  653531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0701 12:24:25.557331  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:24:25.569653  653531 kubeadm.go:877] updating cluster {Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0701 12:24:25.569810  653531 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:24:25.569866  653531 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:24:25.593428  653531 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0701 12:24:25.593450  653531 docker.go:615] Images already preloaded, skipping extraction
	I0701 12:24:25.593535  653531 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:24:25.613507  653531 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0701 12:24:25.613542  653531 cache_images.go:84] Images are preloaded, skipping loading
	I0701 12:24:25.613557  653531 kubeadm.go:928] updating node { 192.168.39.16 8443 v1.30.2 docker true true} ...
	I0701 12:24:25.613677  653531 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0701 12:24:25.613736  653531 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0701 12:24:25.636959  653531 cni.go:84] Creating CNI manager for ""
	I0701 12:24:25.636987  653531 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0701 12:24:25.637001  653531 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0701 12:24:25.637033  653531 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.16 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-735960 NodeName:ha-735960 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0701 12:24:25.637207  653531 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-735960"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
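The block above is the rendered kubeadm/kubelet/kube-proxy payload that is written to /var/tmp/minikube/kubeadm.yaml.new further down (2154 bytes). As a minimal sketch of how such a payload can be produced from the kubeadm.go:181 options, assuming a text/template approach; the kubeadmParams struct and its fields are illustrative stand-ins, not minikube's actual types:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeadmParams is a hypothetical stand-in for the values fed to the
	// template; the real options carry far more fields (see kubeadm.go:181).
	type kubeadmParams struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
	}

	// A fragment of the InitConfiguration document shown above, concatenated
	// line by line so the YAML keeps plain-space indentation.
	const configTmpl = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
		"kind: InitConfiguration\n" +
		"localAPIEndpoint:\n" +
		"  advertiseAddress: {{.AdvertiseAddress}}\n" +
		"  bindPort: {{.BindPort}}\n" +
		"nodeRegistration:\n" +
		"  name: \"{{.NodeName}}\"\n"

	func main() {
		t := template.Must(template.New("kubeadm").Parse(configTmpl))
		p := kubeadmParams{AdvertiseAddress: "192.168.39.16", BindPort: 8443, NodeName: "ha-735960"}
		if err := t.Execute(os.Stdout, p); err != nil { // prints the rendered YAML fragment
			panic(err)
		}
	}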
	I0701 12:24:25.637234  653531 kube-vip.go:115] generating kube-vip config ...
	I0701 12:24:25.637291  653531 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0701 12:24:25.651059  653531 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0701 12:24:25.651192  653531 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
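The vip_leaderelection settings in this manifest (lease plndr-cp-lock in kube-system, a 5s lease, 3s renew deadline, 1s retry period) are standard Kubernetes Lease-based leader election: whichever control-plane node holds the lease answers on the VIP 192.168.39.254. A hedged sketch of the same timings with client-go's leaderelection package follows; it is illustrative only, and kube-vip's own implementation differs:

	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // the static pod runs in-cluster
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "plndr-cp-lock", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "ha-735960"}, // vip_nodename
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 5 * time.Second, // vip_leaseduration
			RenewDeadline: 3 * time.Second, // vip_renewdeadline
			RetryPeriod:   1 * time.Second, // vip_retryperiod
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("leader: would claim the VIP") },
				OnStoppedLeading: func() { log.Println("lost lease: would release the VIP") },
			},
		})
	}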
	I0701 12:24:25.651261  653531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 12:24:25.660952  653531 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:24:25.661049  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0701 12:24:25.669901  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0701 12:24:25.685801  653531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:24:25.701259  653531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0701 12:24:25.717237  653531 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0701 12:24:25.732682  653531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0701 12:24:25.736549  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:24:25.748348  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:25.857797  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:24:25.874307  653531 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960 for IP: 192.168.39.16
	I0701 12:24:25.874340  653531 certs.go:194] generating shared ca certs ...
	I0701 12:24:25.874365  653531 certs.go:226] acquiring lock for ca certs: {Name:mk34e166bfd069e523b2325e14d1812c523bff53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:24:25.874584  653531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key
	I0701 12:24:25.874645  653531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key
	I0701 12:24:25.874659  653531 certs.go:256] generating profile certs ...
	I0701 12:24:25.874733  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key
	I0701 12:24:25.874814  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af
	I0701 12:24:25.874868  653531 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key
	I0701 12:24:25.874883  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 12:24:25.874918  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 12:24:25.874937  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 12:24:25.874955  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 12:24:25.874972  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0701 12:24:25.874991  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0701 12:24:25.875008  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0701 12:24:25.875025  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0701 12:24:25.875093  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem (1338 bytes)
	W0701 12:24:25.875146  653531 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854_empty.pem, impossibly tiny 0 bytes
	I0701 12:24:25.875161  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 12:24:25.875193  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:24:25.875224  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:24:25.875261  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem (1675 bytes)
	I0701 12:24:25.875343  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:24:25.875386  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /usr/share/ca-certificates/6378542.pem
	I0701 12:24:25.875409  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:25.875426  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem -> /usr/share/ca-certificates/637854.pem
	I0701 12:24:25.876083  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:24:25.910761  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:24:25.938480  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:24:25.963281  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:24:25.989413  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0701 12:24:26.015055  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 12:24:26.039406  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 12:24:26.062955  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 12:24:26.093960  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /usr/share/ca-certificates/6378542.pem (1708 bytes)
	I0701 12:24:26.125896  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:24:26.156031  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem --> /usr/share/ca-certificates/637854.pem (1338 bytes)
	I0701 12:24:26.181375  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 12:24:26.209470  653531 ssh_runner.go:195] Run: openssl version
	I0701 12:24:26.218386  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6378542.pem && ln -fs /usr/share/ca-certificates/6378542.pem /etc/ssl/certs/6378542.pem"
	I0701 12:24:26.233243  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6378542.pem
	I0701 12:24:26.241811  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 12:11 /usr/share/ca-certificates/6378542.pem
	I0701 12:24:26.241888  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem
	I0701 12:24:26.250559  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6378542.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 12:24:26.277768  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:24:26.305594  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:26.315685  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:26.315763  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:26.330923  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 12:24:26.351095  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/637854.pem && ln -fs /usr/share/ca-certificates/637854.pem /etc/ssl/certs/637854.pem"
	I0701 12:24:26.374355  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/637854.pem
	I0701 12:24:26.380759  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 12:11 /usr/share/ca-certificates/637854.pem
	I0701 12:24:26.380836  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/637854.pem
	I0701 12:24:26.392584  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/637854.pem /etc/ssl/certs/51391683.0"
	I0701 12:24:26.411160  653531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 12:24:26.419483  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 12:24:26.437558  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 12:24:26.444826  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 12:24:26.454628  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 12:24:26.467473  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 12:24:26.476039  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
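Each openssl x509 -noout -in <cert> -checkend 86400 run above asks whether the certificate will still be valid 24 hours from now; a failing check would trigger regeneration. A minimal Go equivalent of that test (not minikube's code; the path in main is just one of the certs checked above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires inside
	// the given window, mirroring `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if expiring {
			fmt.Println("certificate expires within 24h; it would be regenerated")
		}
	}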
	I0701 12:24:26.482296  653531 kubeadm.go:391] StartCluster: {Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:24:26.482508  653531 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0701 12:24:26.498609  653531 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0701 12:24:26.509374  653531 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0701 12:24:26.509403  653531 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0701 12:24:26.509410  653531 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0701 12:24:26.509466  653531 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 12:24:26.518865  653531 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 12:24:26.519310  653531 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-735960" does not appear in /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:24:26.519460  653531 kubeconfig.go:62] /home/jenkins/minikube-integration/19166-630650/kubeconfig needs updating (will repair): [kubeconfig missing "ha-735960" cluster setting kubeconfig missing "ha-735960" context setting]
	I0701 12:24:26.519772  653531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/kubeconfig: {Name:mke3ef9d019eff4edd273b00c416fd77ed009242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:24:26.520253  653531 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:24:26.520566  653531 kapi.go:59] client config for ha-735960: &rest.Config{Host:"https://192.168.39.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key", CAFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfbb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0701 12:24:26.521041  653531 cert_rotation.go:137] Starting client certificate rotation controller
	I0701 12:24:26.521235  653531 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 12:24:26.530555  653531 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.16
	I0701 12:24:26.530586  653531 kubeadm.go:591] duration metric: took 21.167521ms to restartPrimaryControlPlane
	I0701 12:24:26.530596  653531 kubeadm.go:393] duration metric: took 48.31583ms to StartCluster
	I0701 12:24:26.530618  653531 settings.go:142] acquiring lock: {Name:mk6f7c85ea77a73ff0ac851454721f2e6e309153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:24:26.530700  653531 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:24:26.531272  653531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/kubeconfig: {Name:mke3ef9d019eff4edd273b00c416fd77ed009242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:24:26.531528  653531 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:24:26.531554  653531 start.go:240] waiting for startup goroutines ...
	I0701 12:24:26.531572  653531 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0701 12:24:26.531767  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:26.534496  653531 out.go:177] * Enabled addons: 
	I0701 12:24:26.535873  653531 addons.go:510] duration metric: took 4.304011ms for enable addons: enabled=[]
	I0701 12:24:26.535915  653531 start.go:245] waiting for cluster config update ...
	I0701 12:24:26.535925  653531 start.go:254] writing updated cluster config ...
	I0701 12:24:26.537498  653531 out.go:177] 
	I0701 12:24:26.539211  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:26.539336  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:24:26.541509  653531 out.go:177] * Starting "ha-735960-m02" control-plane node in "ha-735960" cluster
	I0701 12:24:26.542802  653531 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:24:26.542833  653531 cache.go:56] Caching tarball of preloaded images
	I0701 12:24:26.542967  653531 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:24:26.542983  653531 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:24:26.543093  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:24:26.543293  653531 start.go:360] acquireMachinesLock for ha-735960-m02: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:24:26.543355  653531 start.go:364] duration metric: took 39.786µs to acquireMachinesLock for "ha-735960-m02"
	I0701 12:24:26.543382  653531 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:24:26.543393  653531 fix.go:54] fixHost starting: m02
	I0701 12:24:26.543665  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:26.543694  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:26.558741  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34329
	I0701 12:24:26.559300  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:26.559767  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:26.559790  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:26.560107  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:26.560324  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:26.560471  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetState
	I0701 12:24:26.561903  653531 fix.go:112] recreateIfNeeded on ha-735960-m02: state=Stopped err=<nil>
	I0701 12:24:26.561933  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	W0701 12:24:26.562104  653531 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:24:26.564118  653531 out.go:177] * Restarting existing kvm2 VM for "ha-735960-m02" ...
	I0701 12:24:26.565547  653531 main.go:141] libmachine: (ha-735960-m02) Calling .Start
	I0701 12:24:26.565742  653531 main.go:141] libmachine: (ha-735960-m02) Ensuring networks are active...
	I0701 12:24:26.566439  653531 main.go:141] libmachine: (ha-735960-m02) Ensuring network default is active
	I0701 12:24:26.566739  653531 main.go:141] libmachine: (ha-735960-m02) Ensuring network mk-ha-735960 is active
	I0701 12:24:26.567095  653531 main.go:141] libmachine: (ha-735960-m02) Getting domain xml...
	I0701 12:24:26.567681  653531 main.go:141] libmachine: (ha-735960-m02) Creating domain...
	I0701 12:24:27.772734  653531 main.go:141] libmachine: (ha-735960-m02) Waiting to get IP...
	I0701 12:24:27.773478  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:27.773801  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:27.773853  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:27.773777  653719 retry.go:31] will retry after 217.058414ms: waiting for machine to come up
	I0701 12:24:27.992187  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:27.992715  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:27.992745  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:27.992653  653719 retry.go:31] will retry after 295.156992ms: waiting for machine to come up
	I0701 12:24:28.289101  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:28.289597  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:28.289630  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:28.289531  653719 retry.go:31] will retry after 353.406325ms: waiting for machine to come up
	I0701 12:24:28.644006  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:28.644479  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:28.644510  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:28.644437  653719 retry.go:31] will retry after 398.224689ms: waiting for machine to come up
	I0701 12:24:29.044072  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:29.044514  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:29.044545  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:29.044461  653719 retry.go:31] will retry after 547.020131ms: waiting for machine to come up
	I0701 12:24:29.593264  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:29.593690  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:29.593709  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:29.593653  653719 retry.go:31] will retry after 787.756844ms: waiting for machine to come up
	I0701 12:24:30.382731  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:30.383180  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:30.383209  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:30.383137  653719 retry.go:31] will retry after 870.067991ms: waiting for machine to come up
	I0701 12:24:31.254672  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:31.255252  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:31.255285  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:31.255205  653719 retry.go:31] will retry after 1.371479719s: waiting for machine to come up
	I0701 12:24:32.628605  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:32.629092  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:32.629124  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:32.629036  653719 retry.go:31] will retry after 1.347043223s: waiting for machine to come up
	I0701 12:24:33.978739  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:33.979246  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:33.979275  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:33.979195  653719 retry.go:31] will retry after 2.257830197s: waiting for machine to come up
	I0701 12:24:36.239828  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:36.240400  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:36.240433  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:36.240355  653719 retry.go:31] will retry after 2.834526493s: waiting for machine to come up
	I0701 12:24:39.078121  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:39.078416  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:39.078448  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:39.078379  653719 retry.go:31] will retry after 2.465969863s: waiting for machine to come up
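The retry.go:31 lines above are a jittered, growing backoff loop that polls libvirt's DHCP leases until the restarted VM reports an address. A rough sketch of the pattern, with an illustrative schedule and a hypothetical lookupIP stub in place of the real lease query:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for the DHCP-lease lookup that keeps failing above
	// with "unable to find current IP address".
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP retries lookupIP with jittered, growing delays until it
	// succeeds or the deadline passes.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
			time.Sleep(jittered)
			delay = delay * 3 / 2 // grow the base delay between attempts
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		if ip, err := waitForIP(3 * time.Minute); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("Found IP for machine:", ip)
		}
	}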
	I0701 12:24:41.547043  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.547535  653531 main.go:141] libmachine: (ha-735960-m02) Found IP for machine: 192.168.39.86
	I0701 12:24:41.547569  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has current primary IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.547579  653531 main.go:141] libmachine: (ha-735960-m02) Reserving static IP address...
	I0701 12:24:41.547991  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "ha-735960-m02", mac: "52:54:00:0b:2f:ce", ip: "192.168.39.86"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.548015  653531 main.go:141] libmachine: (ha-735960-m02) Reserved static IP address: 192.168.39.86
	I0701 12:24:41.548032  653531 main.go:141] libmachine: (ha-735960-m02) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960-m02", mac: "52:54:00:0b:2f:ce", ip: "192.168.39.86"}
	I0701 12:24:41.548045  653531 main.go:141] libmachine: (ha-735960-m02) DBG | Getting to WaitForSSH function...
	I0701 12:24:41.548059  653531 main.go:141] libmachine: (ha-735960-m02) Waiting for SSH to be available...
	I0701 12:24:41.550171  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.550523  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.550552  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.550644  653531 main.go:141] libmachine: (ha-735960-m02) DBG | Using SSH client type: external
	I0701 12:24:41.550670  653531 main.go:141] libmachine: (ha-735960-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa (-rw-------)
	I0701 12:24:41.550719  653531 main.go:141] libmachine: (ha-735960-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:24:41.550739  653531 main.go:141] libmachine: (ha-735960-m02) DBG | About to run SSH command:
	I0701 12:24:41.550754  653531 main.go:141] libmachine: (ha-735960-m02) DBG | exit 0
	I0701 12:24:41.678305  653531 main.go:141] libmachine: (ha-735960-m02) DBG | SSH cmd err, output: <nil>: 
	I0701 12:24:41.678691  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetConfigRaw
	I0701 12:24:41.679334  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:24:41.682006  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.682508  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.682540  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.682792  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:24:41.683005  653531 machine.go:94] provisionDockerMachine start ...
	I0701 12:24:41.683030  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:41.683290  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:41.685599  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.685951  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.685979  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.686153  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:41.686378  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.686551  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.686684  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:41.686822  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:41.687030  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:41.687043  653531 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:24:41.802622  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:24:41.802657  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:24:41.802940  653531 buildroot.go:166] provisioning hostname "ha-735960-m02"
	I0701 12:24:41.802963  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:24:41.803281  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:41.805937  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.806443  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.806470  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.806608  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:41.806785  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.807003  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.807154  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:41.807371  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:41.807554  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:41.807567  653531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960-m02 && echo "ha-735960-m02" | sudo tee /etc/hostname
	I0701 12:24:41.938306  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960-m02
	
	I0701 12:24:41.938353  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:41.941077  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.941535  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.941592  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.941765  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:41.941994  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.942161  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.942290  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:41.942491  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:41.942676  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:41.942701  653531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:24:42.062715  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 12:24:42.062750  653531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:24:42.062772  653531 buildroot.go:174] setting up certificates
	I0701 12:24:42.062785  653531 provision.go:84] configureAuth start
	I0701 12:24:42.062795  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:24:42.063134  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:24:42.065907  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.066246  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.066279  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.066490  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:42.068450  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.068818  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.068843  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.068957  653531 provision.go:143] copyHostCerts
	I0701 12:24:42.068988  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:24:42.069022  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:24:42.069030  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:24:42.069082  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:24:42.069156  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:24:42.069173  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:24:42.069180  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:24:42.069199  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:24:42.069241  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:24:42.069257  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:24:42.069263  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:24:42.069279  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:24:42.069326  653531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960-m02 san=[127.0.0.1 192.168.39.86 ha-735960-m02 localhost minikube]
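provision.go:117 issues a TLS server certificate for the Docker daemon on m02 with the SAN list shown (127.0.0.1, 192.168.39.86, ha-735960-m02, localhost, minikube). A compressed crypto/x509 sketch of building such a certificate; it is self-signed here for brevity, whereas the real one is signed with the CA key ca-key.pem, and the lifetime mirrors CertExpiration:26280h0m0s from the config above:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-735960-m02"}}, // org= from the log line
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-735960-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.86")},
		}
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // the server.pem equivalent
	}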
	I0701 12:24:42.315961  653531 provision.go:177] copyRemoteCerts
	I0701 12:24:42.316035  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:24:42.316061  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:42.318992  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.319361  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.319395  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.319557  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:42.319740  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.319969  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:42.320092  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:24:42.408924  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:24:42.408999  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:24:42.434942  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:24:42.435038  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0701 12:24:42.458628  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:24:42.458728  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 12:24:42.482505  653531 provision.go:87] duration metric: took 419.705556ms to configureAuth
	I0701 12:24:42.482536  653531 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:24:42.482760  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:42.482797  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:42.483103  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:42.485829  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.486249  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.486277  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.486574  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:42.486846  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.487031  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.487211  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:42.487420  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:42.487596  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:42.487608  653531 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:24:42.603937  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:24:42.603962  653531 buildroot.go:70] root file system type: tmpfs
	I0701 12:24:42.604101  653531 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:24:42.604123  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:42.606937  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.607326  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.607351  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.607512  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:42.607762  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.607935  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.608131  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:42.608318  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:42.608490  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:42.608578  653531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.16"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:24:42.731927  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.16
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:24:42.731963  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:42.735092  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.735552  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.735586  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.735721  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:42.735916  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.736097  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.736226  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:42.736425  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:42.736596  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:42.736613  653531 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:24:44.641546  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 12:24:44.641584  653531 machine.go:97] duration metric: took 2.958558644s to provisionDockerMachine
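The one-liner above is the swap-if-changed pattern: diff exits non-zero when the candidate unit differs from the live one — or, as in this run, when /lib/systemd/system/docker.service does not exist yet — which triggers the mv/daemon-reload/enable/restart block; identical files leave the service untouched. A rough Go equivalent that shells out the same sequence (a sketch, not minikube's internal helper):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Install docker.service.new only if it differs from the live unit,
	// then reload systemd and (re)start docker -- mirrors the SSH command.
	script := `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
  sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
}`
	out, err := exec.Command("bash", "-c", script).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}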
	I0701 12:24:44.641601  653531 start.go:293] postStartSetup for "ha-735960-m02" (driver="kvm2")
	I0701 12:24:44.641615  653531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:24:44.641637  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:44.642004  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:24:44.642040  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:44.645224  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.645706  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:44.645738  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.645868  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:44.646053  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:44.646222  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:44.646376  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:24:44.736407  653531 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:24:44.740656  653531 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:24:44.740682  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:24:44.740758  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:24:44.740835  653531 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:24:44.740848  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:24:44.740945  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:24:44.749928  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:24:44.772391  653531 start.go:296] duration metric: took 130.772957ms for postStartSetup
	I0701 12:24:44.772467  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:44.772787  653531 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:24:44.772824  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:44.775217  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.775582  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:44.775607  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.775804  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:44.776027  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:44.776203  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:44.776383  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:24:44.864587  653531 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:24:44.864665  653531 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:24:44.904439  653531 fix.go:56] duration metric: took 18.361036234s for fixHost
	I0701 12:24:44.904495  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:44.907382  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.907911  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:44.907944  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.908260  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:44.908504  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:44.908689  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:44.908847  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:44.909036  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:44.909257  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:44.909273  653531 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0701 12:24:45.022815  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836684.998547011
	
	I0701 12:24:45.022845  653531 fix.go:216] guest clock: 1719836684.998547011
	I0701 12:24:45.022855  653531 fix.go:229] Guest: 2024-07-01 12:24:44.998547011 +0000 UTC Remote: 2024-07-01 12:24:44.904469964 +0000 UTC m=+42.374321626 (delta=94.077047ms)
	I0701 12:24:45.022878  653531 fix.go:200] guest clock delta is within tolerance: 94.077047ms
	I0701 12:24:45.022885  653531 start.go:83] releasing machines lock for "ha-735960-m02", held for 18.479517819s
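The clock check that just completed runs date +%s.%N on the guest (the %!s(MISSING)/%!N(MISSING) bits are only a logging artifact — the format arguments are dropped when the command template is logged, not when it is executed), parses the epoch value, and accepts the host/guest delta when it is inside tolerance; the 94.077047ms delta here passes. A sketch of that comparison, with a 2-second tolerance assumed purely for illustration:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestTime parses the guest's `date +%s.%N` output into a time.Time.
func guestTime(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(0, int64(secs*1e9)), nil
}

func main() {
	g, err := guestTime("1719836684.998547011") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(g)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("delta=%v within=%v\n", delta, delta <= 2*time.Second)
}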
	I0701 12:24:45.022904  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:45.023158  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:24:45.025946  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.026429  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:45.026468  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.028669  653531 out.go:177] * Found network options:
	I0701 12:24:45.030344  653531 out.go:177]   - NO_PROXY=192.168.39.16
	W0701 12:24:45.031921  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:24:45.031959  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:45.032658  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:45.032888  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:45.033013  653531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:24:45.033058  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	W0701 12:24:45.033081  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:24:45.033171  653531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0701 12:24:45.033195  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:45.035752  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.035981  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.036219  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:45.036245  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.036348  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:45.036378  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.036406  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:45.036593  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:45.036652  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:45.036754  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:45.036826  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:45.036903  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:45.036969  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:24:45.037025  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	W0701 12:24:45.137872  653531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:24:45.137946  653531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:24:45.154683  653531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:24:45.154717  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:24:45.154827  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:24:45.176886  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:24:45.188345  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:24:45.197947  653531 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:24:45.198012  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:24:45.207676  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:24:45.217559  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:24:45.227803  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:24:45.238295  653531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:24:45.248764  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:24:45.258909  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:24:45.268726  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:24:45.279039  653531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:24:45.288042  653531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:24:45.296914  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:45.411404  653531 ssh_runner.go:195] Run: sudo systemctl restart containerd
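The sed chain above rewrites /etc/containerd/config.toml before this restart: it pins the pause image, forces SystemdCgroup = false (the cgroupfs driver named in the log), retires the io.containerd.runtime.v1.linux and runc.v1 shims in favor of io.containerd.runc.v2, resets conf_dir to /etc/cni/net.d, and re-enables unprivileged ports. The SystemdCgroup edit, sketched in Go instead of sed:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		panic(err)
	}
}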
	I0701 12:24:45.436012  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:24:45.436122  653531 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:24:45.450142  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:24:45.462829  653531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:24:45.481152  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:24:45.494283  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:24:45.507074  653531 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:24:45.534155  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:24:45.547185  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:24:45.564773  653531 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:24:45.568760  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:24:45.577542  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:24:45.593021  653531 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:24:45.701211  653531 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:24:45.815750  653531 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:24:45.815810  653531 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 12:24:45.831989  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:45.941168  653531 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:24:48.340550  653531 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.399331326s)
	I0701 12:24:48.340643  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 12:24:48.354582  653531 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0701 12:24:48.370449  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:24:48.383634  653531 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:24:48.491334  653531 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:24:48.612412  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:48.742773  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:24:48.759856  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:24:48.772621  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:48.884376  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0701 12:24:48.964457  653531 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:24:48.964538  653531 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0701 12:24:48.970016  653531 start.go:562] Will wait 60s for crictl version
	I0701 12:24:48.970082  653531 ssh_runner.go:195] Run: which crictl
	I0701 12:24:48.974017  653531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:24:49.010380  653531 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.1
	RuntimeApiVersion:  v1
	I0701 12:24:49.010470  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:24:49.038204  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:24:49.060452  653531 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
	I0701 12:24:49.061662  653531 out.go:177]   - env NO_PROXY=192.168.39.16
	I0701 12:24:49.062894  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:24:49.065420  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:49.065726  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:49.065756  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:49.065973  653531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0701 12:24:49.070110  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
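The grep/rewrite pair above keeps /etc/hosts idempotent: any existing host.minikube.internal line is filtered out, the current mapping is appended, and the temp file is copied back over /etc/hosts. The same logic as a short Go sketch:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale mapping for host.minikube.internal.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
	fmt.Println("hosts updated")
}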
	I0701 12:24:49.082188  653531 mustload.go:65] Loading cluster: ha-735960
	I0701 12:24:49.082530  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:49.082941  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:49.082993  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:49.097892  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43153
	I0701 12:24:49.098396  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:49.098894  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:49.098917  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:49.099215  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:49.099436  653531 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:24:49.100798  653531 host.go:66] Checking if "ha-735960" exists ...
	I0701 12:24:49.101079  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:49.101112  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:49.115736  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34567
	I0701 12:24:49.116185  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:49.116654  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:49.116678  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:49.117007  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:49.117203  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:49.117366  653531 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960 for IP: 192.168.39.86
	I0701 12:24:49.117380  653531 certs.go:194] generating shared ca certs ...
	I0701 12:24:49.117398  653531 certs.go:226] acquiring lock for ca certs: {Name:mk34e166bfd069e523b2325e14d1812c523bff53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:24:49.117551  653531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key
	I0701 12:24:49.117591  653531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key
	I0701 12:24:49.117600  653531 certs.go:256] generating profile certs ...
	I0701 12:24:49.117669  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key
	I0701 12:24:49.117728  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.b19d6c48
	I0701 12:24:49.117760  653531 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key
	I0701 12:24:49.117771  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 12:24:49.117786  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 12:24:49.117800  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 12:24:49.117811  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 12:24:49.117823  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0701 12:24:49.117835  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0701 12:24:49.117847  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0701 12:24:49.117858  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0701 12:24:49.117903  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem (1338 bytes)
	W0701 12:24:49.117934  653531 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854_empty.pem, impossibly tiny 0 bytes
	I0701 12:24:49.117946  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 12:24:49.117973  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:24:49.117994  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:24:49.118013  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem (1675 bytes)
	I0701 12:24:49.118048  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:24:49.118076  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem -> /usr/share/ca-certificates/637854.pem
	I0701 12:24:49.118092  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /usr/share/ca-certificates/6378542.pem
	I0701 12:24:49.118104  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:49.118150  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:49.120907  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:49.121392  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:49.121418  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:49.121523  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:49.121694  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:49.121825  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:49.121959  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:49.190715  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0701 12:24:49.195755  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0701 12:24:49.206197  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0701 12:24:49.209869  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0701 12:24:49.219170  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0701 12:24:49.223114  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0701 12:24:49.233000  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0701 12:24:49.237162  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0701 12:24:49.246812  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0701 12:24:49.250554  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0701 12:24:49.259926  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0701 12:24:49.263843  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0701 12:24:49.274536  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:24:49.299467  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:24:49.322887  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:24:49.345311  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:24:49.367988  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0701 12:24:49.390632  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 12:24:49.416047  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 12:24:49.439560  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 12:24:49.462382  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem --> /usr/share/ca-certificates/637854.pem (1338 bytes)
	I0701 12:24:49.484590  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /usr/share/ca-certificates/6378542.pem (1708 bytes)
	I0701 12:24:49.507507  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:24:49.529932  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0701 12:24:49.545966  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0701 12:24:49.561557  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0701 12:24:49.577402  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0701 12:24:49.593250  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0701 12:24:49.609739  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0701 12:24:49.626015  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0701 12:24:49.643897  653531 ssh_runner.go:195] Run: openssl version
	I0701 12:24:49.649608  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/637854.pem && ln -fs /usr/share/ca-certificates/637854.pem /etc/ssl/certs/637854.pem"
	I0701 12:24:49.660203  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/637854.pem
	I0701 12:24:49.664449  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 12:11 /usr/share/ca-certificates/637854.pem
	I0701 12:24:49.664503  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/637854.pem
	I0701 12:24:49.670228  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/637854.pem /etc/ssl/certs/51391683.0"
	I0701 12:24:49.680554  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6378542.pem && ln -fs /usr/share/ca-certificates/6378542.pem /etc/ssl/certs/6378542.pem"
	I0701 12:24:49.690901  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6378542.pem
	I0701 12:24:49.695200  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 12:11 /usr/share/ca-certificates/6378542.pem
	I0701 12:24:49.695266  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem
	I0701 12:24:49.700503  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6378542.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 12:24:49.710442  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:24:49.720297  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:49.724530  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:49.724590  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:49.729832  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 12:24:49.739574  653531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 12:24:49.743717  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 12:24:49.749498  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 12:24:49.755217  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 12:24:49.761210  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 12:24:49.767138  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 12:24:49.772853  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
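Each openssl x509 -checkend 86400 run above exits zero only if the cert remains valid for at least another 24 hours; a failure would force regeneration instead of the reuse seen in this run. The equivalent test via Go's crypto/x509 — path and window are parameters, and the file name in main is just one of the certs checked above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM cert at path is still valid at now+d,
// i.e. the same check as `openssl x509 -checkend <seconds>`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("valid for 24h:", ok)
}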
	I0701 12:24:49.778598  653531 kubeadm.go:928] updating node {m02 192.168.39.86 8443 v1.30.2 docker true true} ...
	I0701 12:24:49.778706  653531 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0701 12:24:49.778735  653531 kube-vip.go:115] generating kube-vip config ...
	I0701 12:24:49.778769  653531 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0701 12:24:49.792722  653531 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0701 12:24:49.792794  653531 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
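The manifest above runs kube-vip as a static pod on each control plane: vip_leaderelection makes the members contend for the plndr-cp-lock lease (5s duration, 3s renew deadline, 1s retry), and the current leader owns the VIP 192.168.39.254, with lb_enable spreading :8443 across the apiservers. Once a leader holds the lease, a plain TCP probe of the VIP should connect — a trivial check, with address and port taken from the config above:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// VIP and port come from the kube-vip config above.
	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("VIP not answering:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP reachable")
}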
	I0701 12:24:49.792861  653531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 12:24:49.804161  653531 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:24:49.804241  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0701 12:24:49.814550  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0701 12:24:49.831390  653531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:24:49.848397  653531 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0701 12:24:49.865443  653531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0701 12:24:49.869104  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:24:49.880669  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:49.995061  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:24:50.012084  653531 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:24:50.012461  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:50.014165  653531 out.go:177] * Verifying Kubernetes components...
	I0701 12:24:50.015753  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:50.164868  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:24:50.189841  653531 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:24:50.190056  653531 kapi.go:59] client config for ha-735960: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key", CAFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfbb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0701 12:24:50.190130  653531 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.16:8443
	I0701 12:24:50.190323  653531 node_ready.go:35] waiting up to 6m0s for node "ha-735960-m02" to be "Ready" ...
	I0701 12:24:50.190456  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:50.190466  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:50.190477  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:50.190487  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:54.343288  653531 round_trippers.go:574] Response Status:  in 4152 milliseconds
	I0701 12:24:55.343662  653531 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:55.343730  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:55.343744  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:55.343754  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:55.343758  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:55.344302  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:55.344422  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.1:52872->192.168.39.16:8443: read: connection reset by peer
	I0701 12:24:55.344514  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:55.344528  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:55.344538  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:55.344544  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:55.344874  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:55.691490  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:55.691516  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:55.691527  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:55.691533  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:55.691976  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:56.190655  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:56.190680  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:56.190689  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:56.190694  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:56.191223  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:56.690634  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:56.690660  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:56.690669  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:56.690672  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:56.691171  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:57.190543  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:57.190576  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:57.190588  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:57.190593  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:57.191164  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:57.691155  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:57.691185  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:57.691197  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:57.691205  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:57.691722  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:57.691807  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused
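Everything from node_ready.go:35 onward is the wait loop doing its job: a GET on /api/v1/nodes/ha-735960-m02 roughly every 500ms, with connection-refused treated as retryable while the apiserver behind 192.168.39.16:8443 restarts. The shape of that loop, sketched without the real client's TLS client-cert wiring:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// waitForNode polls url until it answers 200 or ctx expires; dial errors
// (connection refused during an apiserver restart) are simply retried.
func waitForNode(ctx context.Context, url string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("node not ready: %w", ctx.Err())
		case <-tick.C:
			resp, err := http.Get(url)
			if err != nil {
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForNode(ctx, "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02"); err != nil {
		fmt.Println(err)
	}
}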
	I0701 12:24:58.190799  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:58.190827  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:58.190841  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:58.190847  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:58.191262  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:58.690909  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:58.690934  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:58.690943  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:58.690947  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:58.691435  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:59.191343  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:59.191369  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:59.191379  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:59.191385  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:59.191790  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:59.691540  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:59.691570  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:59.691582  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:59.691587  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:59.692063  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:59.692155  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused
	I0701 12:25:00.190742  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:00.190767  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:00.190776  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:00.190780  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:00.191351  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:00.691648  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:00.691679  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:00.691691  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:00.691697  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:00.692126  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:01.190745  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:01.190769  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:01.190778  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:01.190784  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:01.191282  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:01.691565  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:01.691597  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:01.691614  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:01.691621  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:01.692000  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:02.191662  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:02.191693  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:02.191706  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:02.191714  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:02.192140  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:02.192224  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused
	I0701 12:25:02.691148  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:02.691173  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:02.691180  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:02.691185  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:02.691566  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:03.190561  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:03.190591  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:03.190603  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:03.190611  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:03.191147  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:03.690811  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:03.690839  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:03.690849  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:03.690854  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:03.691458  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:04.191099  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:04.191130  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:04.191142  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:04.191147  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:04.191609  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:04.691342  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:04.691368  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:04.691376  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:04.691380  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:04.691811  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:04.691897  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused
	I0701 12:25:05.191508  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:05.191532  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:05.191540  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:05.191550  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:05.192027  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:05.690552  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:05.690579  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:05.690588  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:05.690592  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:05.691114  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:06.190741  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:06.190773  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:06.190785  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:06.190790  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:06.191210  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:06.690600  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:06.690630  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:06.690640  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:06.690646  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:06.691129  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:07.191607  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:07.191631  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:07.191639  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:07.191643  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:07.192193  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:07.192283  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused
	I0701 12:25:07.691099  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:07.691129  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:07.691140  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:07.691145  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:07.691572  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:08.191598  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:08.191623  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:08.191632  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:08.191636  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:08.192026  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:08.690679  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:08.690702  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:08.690713  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:08.690717  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:08.691142  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:09.190900  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:09.190924  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:09.190932  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:09.190938  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:09.191395  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:09.690594  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:09.690615  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:09.690623  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:09.690629  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:11.690040  653531 round_trippers.go:574] Response Status: 200 OK in 1999 milliseconds
	I0701 12:25:11.702263  653531 node_ready.go:49] node "ha-735960-m02" has status "Ready":"True"
	I0701 12:25:11.702299  653531 node_ready.go:38] duration metric: took 21.511933368s for node "ha-735960-m02" to be "Ready" ...
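The two lines above close the node phase: after ~21.5s of polling, the GET finally returns 200 OK and ha-735960-m02 reports Ready. A hedged client-go sketch of that readiness check follows; the node name comes from the log, while the kubeconfig path and poll interval are assumptions, and this is an approximation of the node_ready.go step, not minikube's implementation.

// nodeready.go: poll a node's Ready condition via client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path assumed
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-735960-m02", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}

The same get-then-sleep loop also accounts for the request/response pairs repeated throughout this log: each iteration emits one round_trippers request block and one response line.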
	I0701 12:25:11.702313  653531 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 12:25:11.702416  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:25:11.702430  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:11.702441  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:11.702454  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:11.789461  653531 round_trippers.go:574] Response Status: 200 OK in 86 milliseconds
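With the node Ready, pod_ready lists the kube-system pods once (the unfiltered GET above) and matches them against the system-critical labels named at 12:25:11.702313. The sketch below approximates the same selection with one server-side LabelSelector query per label rather than client-side filtering of a single list; the selector strings are copied from the log, everything else is assumed.

// listcritical.go: list kube-system pods matching each system-critical label.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path assumed
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Selector strings copied from the pod_ready.go line above.
	selectors := []string{
		"k8s-app=kube-dns",
		"component=etcd",
		"component=kube-apiserver",
		"component=kube-controller-manager",
		"k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s -> %s\n", sel, p.Name)
		}
	}
}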
	I0701 12:25:11.802344  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:11.802466  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:11.802476  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:11.802483  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:11.802487  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:11.816015  653531 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0701 12:25:11.816768  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:11.816789  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:11.816801  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:11.816808  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:11.831063  653531 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0701 12:25:12.302968  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:12.302992  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:12.303000  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:12.303004  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:12.307067  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:12.308122  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:12.308138  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:12.308146  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:12.308150  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:12.311874  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:12.803638  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:12.803667  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:12.803679  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:12.803686  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:12.814049  653531 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0701 12:25:12.814887  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:12.814910  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:12.814921  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:12.814925  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:12.821738  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:25:13.303576  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:13.303600  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:13.303608  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:13.303614  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:13.307218  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:13.308090  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:13.308106  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:13.308113  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:13.308117  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:13.311302  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:13.803234  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:13.803266  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:13.803274  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:13.803277  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:13.806287  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:13.807004  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:13.807020  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:13.807029  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:13.807032  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:13.809746  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:13.810211  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
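Each `has status "Ready":"False"` line is the verdict of inspecting the coredns pod's PodReady condition after the paired pod and node GETs above. A small self-contained sketch of that condition check, demonstrated on a synthetic pod object (illustrative only):

// podready.go: the Ready test behind the pod_ready.go status lines.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Synthetic pod with PodReady=False, matching the state logged above.
	p := &corev1.Pod{}
	p.Status.Conditions = []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}
	fmt.Println("ready:", podReady(p)) // prints: ready: false
}

The poll then repeats on the same ~500ms cadence for as long as the pod stays NotReady, which is what the remainder of this section records.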
	I0701 12:25:14.302637  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:14.302668  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:14.302676  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:14.302680  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:14.306137  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:14.306904  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:14.306920  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:14.306928  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:14.306932  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:14.309754  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:14.802564  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:14.802587  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:14.802595  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:14.802599  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:14.808775  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:25:14.809568  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:14.809588  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:14.809596  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:14.809601  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:14.812414  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:15.303353  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:15.303378  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:15.303386  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:15.303391  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:15.306881  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:15.307679  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:15.307702  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:15.307712  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:15.307721  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:15.310551  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:15.802545  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:15.802569  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:15.802577  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:15.802582  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:15.806303  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:15.807445  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:15.807462  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:15.807473  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:15.807479  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:15.813688  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:25:15.814187  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:16.303627  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:16.303655  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:16.303664  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:16.303667  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:16.307153  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:16.307819  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:16.307838  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:16.307848  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:16.307854  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:16.317298  653531 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0701 12:25:16.802946  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:16.802971  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:16.802979  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:16.802985  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:16.806421  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:16.807100  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:16.807120  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:16.807130  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:16.807135  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:16.809697  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:17.302581  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:17.302628  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:17.302640  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:17.302648  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:17.307226  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:17.307905  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:17.307922  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:17.307929  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:17.307936  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:17.311203  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:17.803470  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:17.803514  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:17.803526  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:17.803531  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:17.812734  653531 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0701 12:25:17.813577  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:17.813595  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:17.813601  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:17.813608  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:17.818648  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:25:17.819270  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:18.302575  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:18.302597  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:18.302605  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:18.302610  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:18.306847  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:18.307906  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:18.307927  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:18.307937  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:18.307943  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:18.310841  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:18.802657  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:18.802681  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:18.802689  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:18.802692  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:18.805685  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:18.806415  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:18.806434  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:18.806444  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:18.806451  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:18.809781  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:19.303618  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:19.303642  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:19.303650  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:19.303655  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:19.307473  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:19.308257  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:19.308275  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:19.308282  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:19.308286  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:19.311108  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:19.802669  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:19.802691  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:19.802700  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:19.802703  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:19.805915  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:19.806623  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:19.806641  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:19.806648  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:19.806653  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:19.809291  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:20.303135  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:20.303161  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:20.303169  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:20.303173  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:20.306861  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:20.307600  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:20.307618  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:20.307626  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:20.307630  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:20.310953  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:20.311503  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:20.803608  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:20.803633  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:20.803642  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:20.803645  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:20.807878  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:20.808941  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:20.808961  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:20.808969  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:20.808973  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:20.811817  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:21.303623  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:21.303648  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:21.303658  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:21.303662  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:21.307962  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:21.308821  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:21.308839  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:21.308846  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:21.308850  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:21.311792  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:21.803197  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:21.803227  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:21.803239  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:21.803244  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:21.806108  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:21.807085  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:21.807105  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:21.807138  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:21.807147  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:21.809757  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:22.302567  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:22.302593  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:22.302601  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:22.302608  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:22.306177  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:22.307066  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:22.307082  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:22.307091  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:22.307097  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:22.309849  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:22.803488  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:22.803511  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:22.803519  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:22.803523  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:22.807098  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:22.807809  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:22.807828  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:22.807839  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:22.807846  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:22.810906  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:22.811518  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:23.303611  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:23.303700  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:23.303719  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:23.303725  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:23.307759  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:23.308638  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:23.308659  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:23.308669  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:23.308674  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:23.312265  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:23.803188  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:23.803211  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:23.803222  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:23.803227  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:23.808854  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:25:23.810030  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:23.810047  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:23.810057  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:23.810066  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:23.813689  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:24.303587  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:24.303609  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:24.303617  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:24.303622  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:24.306935  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:24.307770  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:24.307786  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:24.307794  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:24.307798  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:24.318402  653531 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0701 12:25:24.803269  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:24.803292  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:24.803302  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:24.803307  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:24.806559  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:24.807235  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:24.807252  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:24.807259  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:24.807264  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:24.809568  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:25.303424  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:25.303447  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:25.303457  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:25.303462  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:25.306169  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:25.306850  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:25.306869  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:25.306877  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:25.306881  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:25.309797  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:25.310316  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:25.803598  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:25.803625  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:25.803636  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:25.803641  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:25.807180  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:25.808080  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:25.808098  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:25.808106  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:25.808110  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:25.810694  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:26.303736  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:26.303758  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:26.303769  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:26.303774  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:26.307524  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:26.308268  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:26.308293  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:26.308304  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:26.308309  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:26.311520  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:26.803295  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:26.803319  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:26.803328  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:26.803332  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:26.806546  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:26.807183  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:26.807197  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:26.807204  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:26.807208  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:26.809974  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:27.302802  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:27.302827  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:27.302836  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:27.302840  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:27.305889  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:27.306573  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:27.306591  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:27.306598  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:27.306602  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:27.309203  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:27.802871  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:27.802896  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:27.802904  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:27.802908  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:27.806439  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:27.807255  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:27.807275  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:27.807283  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:27.807286  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:27.810137  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:27.810761  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:28.303255  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:28.303283  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:28.303295  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:28.303300  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:28.306809  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:28.307731  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:28.307752  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:28.307762  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:28.307768  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:28.311028  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:28.802544  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:28.802570  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:28.802580  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:28.802585  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:28.805960  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:28.806724  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:28.806740  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:28.806815  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:28.806826  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:28.809472  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:29.303397  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:29.303427  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:29.303438  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:29.303443  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:29.306785  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:29.307565  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:29.307584  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:29.307592  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:29.307596  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:29.310517  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:29.802683  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:29.802709  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:29.802717  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:29.802720  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:29.806680  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:29.807385  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:29.807404  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:29.807414  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:29.807420  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:29.810474  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:29.811143  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:30.303599  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:30.303629  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:30.303639  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:30.303643  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:30.307801  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:30.308475  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:30.308491  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:30.308498  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:30.308503  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:30.311947  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:30.802655  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:30.802680  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:30.802688  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:30.802692  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:30.806031  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:30.806743  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:30.806762  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:30.806769  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:30.806774  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:30.809315  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:31.303311  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:31.303340  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:31.303350  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:31.303354  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:31.306583  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:31.307361  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:31.307384  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:31.307395  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:31.307399  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:31.311058  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:31.802712  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:31.802740  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:31.802749  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:31.802753  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:31.806584  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:31.807317  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:31.807336  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:31.807347  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:31.807361  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:31.810401  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:32.303636  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:32.303663  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:32.303671  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:32.303676  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:32.307011  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:32.307797  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:32.307815  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:32.307825  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:32.307831  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:32.314944  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:25:32.315492  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:32.802803  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:32.802830  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:32.802838  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:32.802844  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:32.807127  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:32.807884  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:32.807907  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:32.807917  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:32.807922  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:32.811565  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:33.303372  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:33.303399  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:33.303416  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:33.303421  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:33.307271  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:33.307961  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:33.307981  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:33.307988  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:33.308001  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:33.310760  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:33.802604  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:33.802631  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:33.802640  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:33.802643  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:33.806300  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:33.807219  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:33.807238  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:33.807245  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:33.807250  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:33.810578  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:34.303606  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:34.303632  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:34.303640  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:34.303644  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:34.308029  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:34.309132  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:34.309159  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:34.309172  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:34.309180  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:34.313056  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:34.803231  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:34.803261  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:34.803273  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:34.803278  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:34.806971  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:34.807591  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:34.807609  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:34.807617  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:34.807621  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:34.810457  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:34.810998  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:35.303350  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:35.303377  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:35.303386  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:35.303390  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:35.307557  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:35.310343  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:35.310361  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:35.310370  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:35.310374  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:35.314047  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:35.803318  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:35.803343  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:35.803352  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:35.803355  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:35.806663  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:35.807415  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:35.807435  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:35.807451  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:35.807460  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:35.810577  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:36.303513  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:36.303545  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:36.303577  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:36.303584  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:36.307367  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:36.308070  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:36.308089  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:36.308100  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:36.308106  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:36.312298  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:36.803266  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:36.803291  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:36.803299  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:36.803303  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:36.807158  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:36.807888  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:36.807906  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:36.807913  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:36.807918  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:36.811315  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:36.811752  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:37.303051  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:37.303079  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:37.303090  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:37.303094  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:37.307312  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:37.308243  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:37.308264  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:37.308275  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:37.308282  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:37.311883  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:37.802545  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:37.802572  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:37.802581  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:37.802585  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:37.805697  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:37.806592  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:37.806612  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:37.806622  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:37.806627  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:37.809149  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:38.302574  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:38.302602  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:38.302615  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:38.302621  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:38.306531  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:38.307159  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:38.307178  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:38.307189  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:38.307193  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:38.310496  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:38.803467  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:38.803495  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:38.803504  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:38.803509  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:38.807052  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:38.807927  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:38.807944  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:38.807951  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:38.807956  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:38.810712  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:39.302764  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:39.302790  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:39.302801  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:39.302805  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:39.306507  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:39.307614  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:39.307633  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:39.307641  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:39.307645  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:39.311327  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:39.311854  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:39.803193  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:39.803216  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:39.803225  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:39.803229  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:39.806519  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:39.807496  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:39.807515  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:39.807525  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:39.807532  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:39.810711  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:40.303599  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:40.303624  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:40.303633  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:40.303637  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:40.307414  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:40.308201  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:40.308227  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:40.308236  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:40.308242  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:40.313547  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:25:40.803513  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:40.803535  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:40.803543  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:40.803548  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:40.806979  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:40.807738  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:40.807753  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:40.807761  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:40.807765  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:40.810649  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:41.303319  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:41.303343  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:41.303351  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:41.303355  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:41.307376  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:41.307943  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:41.307958  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:41.307965  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:41.307970  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:41.311161  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:41.803525  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:41.803549  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:41.803556  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:41.803559  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:41.806564  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:41.807431  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:41.807453  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:41.807464  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:41.807470  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:41.810527  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:41.811143  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:42.303619  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:42.303650  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:42.303662  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:42.303670  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:42.307838  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:42.308516  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:42.308536  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:42.308544  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:42.308550  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:42.312418  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:42.803505  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:42.803530  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:42.803540  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:42.803543  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:42.807116  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:42.808027  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:42.808044  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:42.808051  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:42.808055  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:42.810713  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:43.303632  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:43.303654  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:43.303664  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:43.303668  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:43.307247  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:43.307986  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:43.308002  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:43.308009  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:43.308013  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:43.310824  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:43.802592  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:43.802620  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:43.802628  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:43.802632  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:43.806238  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:43.807037  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:43.807059  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:43.807072  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:43.807076  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:43.809889  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:44.302994  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:44.303018  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:44.303026  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:44.303030  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:44.306644  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:44.307454  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:44.307470  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:44.307478  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:44.307482  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:44.311122  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:44.311762  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:44.803237  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:44.803267  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:44.803279  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:44.803286  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:44.807350  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:44.808020  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:44.808038  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:44.808045  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:44.808051  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:44.810846  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:45.302711  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:45.302735  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:45.302744  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:45.302748  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:45.306615  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:45.307478  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:45.307497  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:45.307508  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:45.307514  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:45.310453  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:45.803401  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:45.803428  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:45.803439  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:45.803444  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:45.807308  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:45.808014  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:45.808029  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:45.808036  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:45.808039  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:45.810822  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:46.302557  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:46.302584  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:46.302597  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:46.302601  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:46.306132  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:46.306862  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:46.306879  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:46.306888  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:46.306894  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:46.310611  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:46.803427  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:46.803455  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:46.803467  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:46.803474  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:46.807174  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:46.807896  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:46.807913  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:46.807921  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:46.807924  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:46.810938  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:46.811392  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:47.302820  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:47.302850  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:47.302859  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:47.302863  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:47.306419  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:47.307190  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:47.307211  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:47.307218  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:47.307222  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:47.309980  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:47.803501  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:47.803525  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:47.803534  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:47.803537  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:47.808075  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:47.808877  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:47.808896  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:47.808905  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:47.808910  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:47.815820  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:25:48.302668  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:48.302699  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:48.302709  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:48.302716  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:48.308126  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:25:48.308931  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:48.308949  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:48.308960  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:48.308965  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:48.317071  653531 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0701 12:25:48.802646  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:48.802669  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:48.802678  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:48.802682  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:48.807515  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:48.808381  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:48.808403  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:48.808413  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:48.808422  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:48.811034  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:48.811475  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:49.303193  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:49.303217  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:49.303225  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:49.303230  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:49.307574  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:49.308269  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:49.308285  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:49.308293  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:49.308297  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:49.312047  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:49.802745  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:49.802768  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:49.802776  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:49.802780  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:49.806546  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:49.807294  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:49.807313  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:49.807321  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:49.807326  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:49.810700  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:50.303644  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:50.303674  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:50.303684  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:50.303688  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:50.308034  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:50.308788  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:50.308807  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:50.308817  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:50.308823  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:50.313190  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:50.802959  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:50.802983  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:50.802992  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:50.802996  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:50.806875  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:50.807540  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:50.807558  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:50.807566  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:50.807571  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:50.810319  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:51.303292  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:51.303322  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:51.303334  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:51.303339  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:51.307067  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:51.307838  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:51.307858  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:51.307869  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:51.307875  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:51.312843  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:51.313579  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:51.803287  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:51.803312  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:51.803323  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:51.803329  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:51.807231  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:51.807995  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:51.808012  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:51.808020  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:51.808024  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:51.810740  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:52.303605  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:52.303629  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:52.303638  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:52.303643  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:52.306821  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:52.307565  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:52.307584  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:52.307594  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:52.307602  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:52.311075  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:52.803586  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:52.803610  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:52.803619  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:52.803623  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:52.807457  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:52.808236  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:52.808255  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:52.808266  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:52.808272  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:52.811703  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:53.303621  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:53.303644  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:53.303652  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:53.303656  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:53.310115  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:25:53.310845  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:53.310863  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:53.310874  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:53.310878  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:53.313553  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:53.314016  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:53.803325  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:53.803349  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:53.803357  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:53.803361  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:53.806896  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:53.807585  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:53.807601  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:53.807608  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:53.807613  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:53.810245  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:54.302928  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:54.302952  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:54.302960  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:54.302963  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:54.306523  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:54.307165  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:54.307184  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:54.307195  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:54.307203  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:54.310455  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:54.803344  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:54.803367  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:54.803377  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:54.803380  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:54.806607  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:54.807210  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:54.807225  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:54.807233  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:54.807236  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:54.809746  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:55.303597  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:55.303623  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:55.303633  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:55.303637  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:55.307054  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:55.307759  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:55.307774  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:55.307781  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:55.307788  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:55.313043  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:25:55.802698  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:55.802725  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:55.802736  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:55.802745  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:55.805918  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:55.806665  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:55.806682  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:55.806690  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:55.806694  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:55.809347  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:55.809833  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:56.303433  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:56.303460  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:56.303471  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:56.303479  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:56.307327  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:56.308094  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:56.308118  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:56.308126  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:56.308130  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:56.311241  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:56.803577  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:56.803605  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:56.803612  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:56.803616  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:56.806932  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:56.807699  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:56.807716  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:56.807724  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:56.807727  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:56.812547  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:57.303545  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:57.303573  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:57.303582  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:57.303586  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:57.307516  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:57.308162  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:57.308179  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:57.308186  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:57.308193  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:57.310961  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:57.803457  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:57.803482  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:57.803493  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:57.803500  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:57.807806  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:57.808679  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:57.808694  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:57.808704  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:57.808711  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:57.811544  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:57.811984  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:58.303446  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:58.303471  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:58.303480  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:58.303484  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:58.307082  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:58.307737  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:58.307754  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:58.307762  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:58.307770  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:58.310778  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:58.803647  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:58.803671  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:58.803680  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:58.803690  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:58.807621  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:58.808241  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:58.808258  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:58.808266  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:58.808271  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:58.811002  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.302934  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:59.302961  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.302971  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.302976  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.306476  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:59.307188  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:59.307205  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.307213  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.307216  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.312012  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:59.803004  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:59.803028  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.803037  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.803041  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.806220  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:59.807058  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:59.807077  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.807083  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.807087  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.810042  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.810618  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"True"
	I0701 12:25:59.810639  653531 pod_ready.go:81] duration metric: took 48.008262746s for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
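
The 48-second span above is a single readiness poll: pod_ready.go re-fetches the coredns pod and its node roughly every 500ms until the pod's Ready condition flips to True, at which point the `pod_ready.go:92` line reports `"Ready":"True"` and the duration metric is logged. Below is a minimal client-go sketch of that polling pattern, included for illustration only; it is not minikube's actual pod_ready.go. The namespace, pod name, and ~500ms cadence are taken from the log above, while the kubeconfig path, 6-minute timeout, and error handling are placeholder assumptions.

	// Sketch of a pod-readiness poll as seen in the log above.
	// NOT minikube's pod_ready.go; an assumption-labeled illustration.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True,
	// the same condition pod_ready.go logs as has status "Ready":"True".
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder: default kubeconfig at ~/.kube/config.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(config)

		// Placeholder timeout matching the "waiting up to 6m0s" lines in the log.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		for {
			// One GET of the pod per iteration; each such request appears
			// in the log as a round_trippers.go:463 line.
			pod, err := client.CoreV1().Pods("kube-system").
				Get(ctx, "coredns-7db6d8ff4d-nk4lf", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				panic("timed out waiting for pod to become Ready")
			case <-time.After(500 * time.Millisecond):
				// ~500ms cadence, matching the timestamps above.
			}
		}
	}

Each pod/node GET pair in the log corresponds to one iteration of a loop like this; the second GET (on /api/v1/nodes/ha-735960) is the node-health check minikube performs alongside the pod fetch.
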
	I0701 12:25:59.810648  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.810702  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p4rtz
	I0701 12:25:59.810709  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.810716  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.810720  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.813396  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.813957  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:59.813972  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.813979  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.813982  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.816606  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.816994  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace has status "Ready":"True"
	I0701 12:25:59.817012  653531 pod_ready.go:81] duration metric: took 6.357752ms for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.817021  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.817069  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960
	I0701 12:25:59.817076  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.817084  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.817090  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.819509  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.819970  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:59.819984  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.819991  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.819995  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.822382  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.822919  653531 pod_ready.go:92] pod "etcd-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:25:59.822941  653531 pod_ready.go:81] duration metric: took 5.912537ms for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.822951  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.823013  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m02
	I0701 12:25:59.823021  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.823028  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.823032  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.825241  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.825771  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:59.825785  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.825791  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.825795  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.828111  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.828706  653531 pod_ready.go:92] pod "etcd-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:25:59.828725  653531 pod_ready.go:81] duration metric: took 5.760203ms for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.828740  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.828804  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:25:59.828813  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.828820  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.828827  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.832068  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:59.832863  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:25:59.832878  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.832885  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.832892  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.835452  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.835992  653531 pod_ready.go:97] node "ha-735960-m03" hosting pod "etcd-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:25:59.836024  653531 pod_ready.go:81] duration metric: took 7.273472ms for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	E0701 12:25:59.836031  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m03" hosting pod "etcd-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:25:59.836046  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:00.003492  653531 request.go:629] Waited for 167.376104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960
	I0701 12:26:00.003566  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960
	I0701 12:26:00.003574  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:00.003585  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:00.003603  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:00.011681  653531 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0701 12:26:00.203578  653531 request.go:629] Waited for 191.210292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:00.203641  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:00.203647  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:00.203654  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:00.203664  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:00.207391  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:00.207910  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:00.207934  653531 pod_ready.go:81] duration metric: took 371.877302ms for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
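
	[The "Waited for ... due to client-side throttling, not priority and fairness" lines above are emitted by client-go itself, not the API server: by default a client is allowed 5 requests/second with a burst of 10, and the rapid paired pod/node GETs here spend that burst, so each following request blocks on the client's token bucket for roughly 200ms, which matches the ~195ms waits in this log. A minimal sketch of where those limits live, assuming client-go v0.30; the kubeconfig path is a placeholder, not taken from this run:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path is hypothetical for this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}

		// client-go's defaults: once the 10-request burst is spent, each
		// call waits on a token bucket refilled at QPS (5/s => ~200ms per
		// request), producing the "client-side throttling" waits above.
		cfg.QPS = 5
		cfg.Burst = 10

		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for i := 0; i < 20; i++ {
			// The 11th and later GETs block client-side before being sent.
			if _, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-735960", metav1.GetOptions{}); err != nil {
				fmt.Println("get node:", err)
			}
		}
	}
	]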
	I0701 12:26:00.207946  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:00.403020  653531 request.go:629] Waited for 194.98389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m02
	I0701 12:26:00.403111  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m02
	I0701 12:26:00.403119  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:00.403141  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:00.403168  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:00.406515  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:00.603670  653531 request.go:629] Waited for 196.408497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:00.603756  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:00.603766  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:00.603776  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:00.603787  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:00.607641  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:00.608254  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:00.608279  653531 pod_ready.go:81] duration metric: took 400.3268ms for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:00.608290  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:00.803335  653531 request.go:629] Waited for 194.970976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:00.803416  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:00.803423  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:00.803432  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:00.803437  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:00.806887  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.003849  653531 request.go:629] Waited for 196.371058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:01.003924  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:01.003931  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:01.003942  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:01.003947  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:01.007167  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.007625  653531 pod_ready.go:97] node "ha-735960-m03" hosting pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:01.007649  653531 pod_ready.go:81] duration metric: took 399.353356ms for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	E0701 12:26:01.007659  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m03" hosting pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:01.007667  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:01.203752  653531 request.go:629] Waited for 195.992128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960
	I0701 12:26:01.203816  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960
	I0701 12:26:01.203821  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:01.203829  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:01.203835  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:01.207391  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.403364  653531 request.go:629] Waited for 195.371527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:01.403446  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:01.403452  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:01.403460  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:01.403464  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:01.406768  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.407262  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:01.407282  653531 pod_ready.go:81] duration metric: took 399.606397ms for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:01.407291  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:01.603806  653531 request.go:629] Waited for 196.426419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m02
	I0701 12:26:01.603868  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m02
	I0701 12:26:01.603877  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:01.603885  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:01.603889  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:01.607133  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.803115  653531 request.go:629] Waited for 195.29931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:01.803195  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:01.803202  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:01.803213  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:01.803220  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:01.806296  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.806997  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:01.807020  653531 pod_ready.go:81] duration metric: took 399.723075ms for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:01.807032  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:02.003077  653531 request.go:629] Waited for 195.935538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m03
	I0701 12:26:02.003184  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m03
	I0701 12:26:02.003199  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:02.003212  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:02.003220  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:02.008458  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:26:02.203469  653531 request.go:629] Waited for 194.368942ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:02.203529  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:02.203535  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:02.203542  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:02.203546  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:02.207148  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:02.207764  653531 pod_ready.go:97] node "ha-735960-m03" hosting pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:02.207791  653531 pod_ready.go:81] duration metric: took 400.749537ms for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	E0701 12:26:02.207804  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m03" hosting pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:02.207816  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:02.403791  653531 request.go:629] Waited for 195.887211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25ssf
	I0701 12:26:02.403858  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25ssf
	I0701 12:26:02.403864  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:02.403874  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:02.403879  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:02.407843  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:02.603935  653531 request.go:629] Waited for 195.282891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:26:02.604003  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:26:02.604008  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:02.604017  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:02.604024  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:02.607222  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:02.607681  653531 pod_ready.go:97] node "ha-735960-m04" hosting pod "kube-proxy-25ssf" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m04" has status "Ready":"Unknown"
	I0701 12:26:02.607701  653531 pod_ready.go:81] duration metric: took 399.872451ms for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	E0701 12:26:02.607710  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m04" hosting pod "kube-proxy-25ssf" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m04" has status "Ready":"Unknown"
	I0701 12:26:02.607715  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:02.803135  653531 request.go:629] Waited for 195.335441ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:26:02.803208  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:26:02.803214  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:02.803221  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:02.803229  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:02.806089  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:03.004065  653531 request.go:629] Waited for 197.373789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:03.004141  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:03.004150  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:03.004158  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:03.004174  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:03.007294  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:03.007921  653531 pod_ready.go:97] node "ha-735960-m03" hosting pod "kube-proxy-776rt" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:03.007945  653531 pod_ready.go:81] duration metric: took 400.223567ms for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
	E0701 12:26:03.007955  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m03" hosting pod "kube-proxy-776rt" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:03.007961  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:03.204042  653531 request.go:629] Waited for 195.997795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:26:03.204129  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:26:03.204135  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:03.204143  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:03.204151  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:03.207989  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:03.404038  653531 request.go:629] Waited for 195.374708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:03.404108  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:03.404113  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:03.404122  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:03.404127  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:03.407364  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:03.407859  653531 pod_ready.go:92] pod "kube-proxy-b6knb" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:03.407879  653531 pod_ready.go:81] duration metric: took 399.911763ms for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:03.407889  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:03.603040  653531 request.go:629] Waited for 195.068023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:26:03.603123  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:26:03.603128  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:03.603137  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:03.603141  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:03.606547  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:03.803798  653531 request.go:629] Waited for 196.387613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:03.803870  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:03.803875  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:03.803883  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:03.803888  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:03.807381  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:03.807877  653531 pod_ready.go:92] pod "kube-proxy-lphzn" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:03.807898  653531 pod_ready.go:81] duration metric: took 400.000751ms for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:03.807907  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:04.004031  653531 request.go:629] Waited for 196.031388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:26:04.004089  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:26:04.004095  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:04.004107  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:04.004115  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:04.007598  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:04.204058  653531 request.go:629] Waited for 195.850938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:04.204148  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:04.204158  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:04.204172  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:04.204181  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:04.207457  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:04.208086  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:04.208102  653531 pod_ready.go:81] duration metric: took 400.189366ms for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:04.208112  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:04.403245  653531 request.go:629] Waited for 195.048743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:26:04.403318  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:26:04.403323  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:04.403331  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:04.403335  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:04.406662  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:04.603781  653531 request.go:629] Waited for 196.396031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:04.603851  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:04.603858  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:04.603868  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:04.603872  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:04.607382  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:04.607837  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:04.607857  653531 pod_ready.go:81] duration metric: took 399.737176ms for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:04.607869  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:04.803931  653531 request.go:629] Waited for 195.967281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:26:04.804004  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:26:04.804010  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:04.804018  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:04.804025  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:04.807572  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:05.003764  653531 request.go:629] Waited for 195.365798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:05.003830  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:05.003836  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:05.003844  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:05.003852  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:05.006888  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:05.007360  653531 pod_ready.go:97] node "ha-735960-m03" hosting pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:05.007379  653531 pod_ready.go:81] duration metric: took 399.502183ms for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	E0701 12:26:05.007388  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m03" hosting pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:05.007396  653531 pod_ready.go:38] duration metric: took 53.305072048s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
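
	[Each pod_ready block above is one iteration of the same pattern: GET the pod, GET the node it runs on, and only count the pod Ready when the hosting node is Ready too, which is why the m03 pods are skipped while node ha-735960-m03 reports Ready:"Unknown". A stand-alone sketch of that loop, assuming client-go v0.30; the helper name, 2-second interval, and error wording are illustrative, not minikube's own code:

	package readiness

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls until the named pod reports the PodReady condition,
	// mirroring the paired pod/node GETs traced in the log above.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API error: keep polling
				}
				node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
				if err != nil {
					return false, nil
				}
				// A pod on a NotReady/Unknown node can never pass this
				// check, so bail out the way the WaitExtra messages do.
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
						return false, fmt.Errorf("node %q hosting pod %q is not Ready", node.Name, pod.Name)
					}
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
	]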
	I0701 12:26:05.007419  653531 api_server.go:52] waiting for apiserver process to appear ...
	I0701 12:26:05.007525  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 12:26:05.023687  653531 logs.go:276] 2 containers: [f615f587cb12 c36c1d459356]
	I0701 12:26:05.023779  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 12:26:05.041137  653531 logs.go:276] 2 containers: [68c63c4abd01 dff0f4abea41]
	I0701 12:26:05.041235  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 12:26:05.059910  653531 logs.go:276] 0 containers: []
	W0701 12:26:05.059939  653531 logs.go:278] No container was found matching "coredns"
	I0701 12:26:05.060005  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 12:26:05.076858  653531 logs.go:276] 2 containers: [279483668a9c 58811626a0de]
	I0701 12:26:05.076953  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 12:26:05.091973  653531 logs.go:276] 2 containers: [156169e4ac3c 2885f7cf6f93]
	I0701 12:26:05.092072  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 12:26:05.109350  653531 logs.go:276] 2 containers: [a72e102b5bf7 a1160a455902]
	I0701 12:26:05.109445  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 12:26:05.126947  653531 logs.go:276] 2 containers: [c8184f4bc096 8c3a5ac0cf85]
	I0701 12:26:05.127013  653531 logs.go:123] Gathering logs for container status ...
	I0701 12:26:05.127032  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 12:26:05.172758  653531 logs.go:123] Gathering logs for describe nodes ...
	I0701 12:26:05.172800  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 12:26:05.530082  653531 logs.go:123] Gathering logs for kube-apiserver [f615f587cb12] ...
	I0701 12:26:05.530114  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f615f587cb12"
	I0701 12:26:05.563833  653531 logs.go:123] Gathering logs for kube-apiserver [c36c1d459356] ...
	I0701 12:26:05.563866  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36c1d459356"
	I0701 12:26:05.633259  653531 logs.go:123] Gathering logs for etcd [dff0f4abea41] ...
	I0701 12:26:05.633305  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dff0f4abea41"
	I0701 12:26:05.672146  653531 logs.go:123] Gathering logs for kube-scheduler [58811626a0de] ...
	I0701 12:26:05.672187  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58811626a0de"
	I0701 12:26:05.693508  653531 logs.go:123] Gathering logs for kube-proxy [2885f7cf6f93] ...
	I0701 12:26:05.693553  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2885f7cf6f93"
	I0701 12:26:05.717857  653531 logs.go:123] Gathering logs for Docker ...
	I0701 12:26:05.717889  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 12:26:05.766696  653531 logs.go:123] Gathering logs for dmesg ...
	I0701 12:26:05.766736  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 12:26:05.781553  653531 logs.go:123] Gathering logs for kube-proxy [156169e4ac3c] ...
	I0701 12:26:05.781587  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 156169e4ac3c"
	I0701 12:26:05.807724  653531 logs.go:123] Gathering logs for kindnet [8c3a5ac0cf85] ...
	I0701 12:26:05.807758  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a5ac0cf85"
	I0701 12:26:05.830042  653531 logs.go:123] Gathering logs for etcd [68c63c4abd01] ...
	I0701 12:26:05.830072  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68c63c4abd01"
	I0701 12:26:05.862525  653531 logs.go:123] Gathering logs for kube-controller-manager [a72e102b5bf7] ...
	I0701 12:26:05.862568  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a72e102b5bf7"
	I0701 12:26:05.901329  653531 logs.go:123] Gathering logs for kube-controller-manager [a1160a455902] ...
	I0701 12:26:05.901370  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1160a455902"
	I0701 12:26:05.942097  653531 logs.go:123] Gathering logs for kindnet [c8184f4bc096] ...
	I0701 12:26:05.942139  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8184f4bc096"
	I0701 12:26:05.964792  653531 logs.go:123] Gathering logs for kubelet ...
	I0701 12:26:05.964829  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 12:26:06.027347  653531 logs.go:123] Gathering logs for kube-scheduler [279483668a9c] ...
	I0701 12:26:06.027394  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483668a9c"
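
	[The interleaved "Gathering logs for ..." runs above all follow one recipe: list container IDs matching each k8s_<component> name filter, then tail the last 400 lines of each match. A local-docker sketch of that recipe; minikube issues the same docker commands through its ssh_runner inside the VM, and the function name here is illustrative:

	package diagnose

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// gatherContainerLogs lists containers whose name matches k8s_<component>
	// and tails each one's last 400 log lines, as the runs above do.
	func gatherContainerLogs(component string) (map[string]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		logs := make(map[string]string)
		for _, id := range strings.Fields(string(out)) {
			// CombinedOutput: container stderr is part of what we keep.
			buf, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				return nil, fmt.Errorf("docker logs %s: %w", id, err)
			}
			logs[id] = string(buf)
		}
		return logs, nil
	}
	]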
	I0701 12:26:08.550396  653531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 12:26:08.565837  653531 api_server.go:72] duration metric: took 1m18.553699317s to wait for apiserver process to appear ...
	I0701 12:26:08.565866  653531 api_server.go:88] waiting for apiserver healthz status ...
	I0701 12:26:08.565941  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 12:26:08.584274  653531 logs.go:276] 2 containers: [f615f587cb12 c36c1d459356]
	I0701 12:26:08.584349  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 12:26:08.601551  653531 logs.go:276] 2 containers: [68c63c4abd01 dff0f4abea41]
	I0701 12:26:08.601633  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 12:26:08.619657  653531 logs.go:276] 0 containers: []
	W0701 12:26:08.619687  653531 logs.go:278] No container was found matching "coredns"
	I0701 12:26:08.619744  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 12:26:08.637393  653531 logs.go:276] 2 containers: [279483668a9c 58811626a0de]
	I0701 12:26:08.637473  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 12:26:08.662222  653531 logs.go:276] 2 containers: [156169e4ac3c 2885f7cf6f93]
	I0701 12:26:08.662307  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 12:26:08.678542  653531 logs.go:276] 2 containers: [a72e102b5bf7 a1160a455902]
	I0701 12:26:08.678649  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 12:26:08.698914  653531 logs.go:276] 2 containers: [c8184f4bc096 8c3a5ac0cf85]
	I0701 12:26:08.698956  653531 logs.go:123] Gathering logs for kube-scheduler [58811626a0de] ...
	I0701 12:26:08.698968  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58811626a0de"
	I0701 12:26:08.722744  653531 logs.go:123] Gathering logs for kube-controller-manager [a72e102b5bf7] ...
	I0701 12:26:08.722780  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a72e102b5bf7"
	I0701 12:26:08.767782  653531 logs.go:123] Gathering logs for kindnet [8c3a5ac0cf85] ...
	I0701 12:26:08.767825  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a5ac0cf85"
	I0701 12:26:08.792700  653531 logs.go:123] Gathering logs for Docker ...
	I0701 12:26:08.792731  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 12:26:08.841902  653531 logs.go:123] Gathering logs for container status ...
	I0701 12:26:08.841943  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 12:26:08.885531  653531 logs.go:123] Gathering logs for kubelet ...
	I0701 12:26:08.885563  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 12:26:08.940130  653531 logs.go:123] Gathering logs for etcd [68c63c4abd01] ...
	I0701 12:26:08.940179  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68c63c4abd01"
	I0701 12:26:08.973841  653531 logs.go:123] Gathering logs for etcd [dff0f4abea41] ...
	I0701 12:26:08.973883  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dff0f4abea41"
	I0701 12:26:09.008785  653531 logs.go:123] Gathering logs for kube-apiserver [f615f587cb12] ...
	I0701 12:26:09.008824  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f615f587cb12"
	I0701 12:26:09.040512  653531 logs.go:123] Gathering logs for kube-apiserver [c36c1d459356] ...
	I0701 12:26:09.040568  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36c1d459356"
	I0701 12:26:09.135818  653531 logs.go:123] Gathering logs for kube-scheduler [279483668a9c] ...
	I0701 12:26:09.135876  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483668a9c"
	I0701 12:26:09.158758  653531 logs.go:123] Gathering logs for describe nodes ...
	I0701 12:26:09.158802  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 12:26:09.415637  653531 logs.go:123] Gathering logs for kube-proxy [2885f7cf6f93] ...
	I0701 12:26:09.415685  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2885f7cf6f93"
	I0701 12:26:09.438064  653531 logs.go:123] Gathering logs for kindnet [c8184f4bc096] ...
	I0701 12:26:09.438104  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8184f4bc096"
	I0701 12:26:09.463612  653531 logs.go:123] Gathering logs for dmesg ...
	I0701 12:26:09.463666  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 12:26:09.477906  653531 logs.go:123] Gathering logs for kube-proxy [156169e4ac3c] ...
	I0701 12:26:09.477936  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 156169e4ac3c"
	I0701 12:26:09.501662  653531 logs.go:123] Gathering logs for kube-controller-manager [a1160a455902] ...
	I0701 12:26:09.501704  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1160a455902"
	I0701 12:26:12.049246  653531 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0701 12:26:12.055739  653531 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0701 12:26:12.055824  653531 round_trippers.go:463] GET https://192.168.39.16:8443/version
	I0701 12:26:12.055829  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:12.055837  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:12.055841  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:12.056892  653531 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0701 12:26:12.057034  653531 api_server.go:141] control plane version: v1.30.2
	I0701 12:26:12.057055  653531 api_server.go:131] duration metric: took 3.491183076s to wait for apiserver health ...
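
	[The healthz probe above is the gate between "an apiserver process exists" and "the control plane answers": GET /healthz must return the literal body "ok" before the /version call that reports v1.30.2. A sketch of that handshake, assuming client-go v0.30; the function name and error wording are illustrative:

	package health

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
	)

	// apiserverHealthy mirrors the healthz-then-version handshake above:
	// /healthz must answer "ok" before the server version is read.
	func apiserverHealthy(ctx context.Context, cs kubernetes.Interface) error {
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
		if err != nil {
			return fmt.Errorf("healthz probe failed: %w", err)
		}
		if string(body) != "ok" {
			return fmt.Errorf("healthz returned %q, want \"ok\"", body)
		}
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			return err
		}
		fmt.Println("control plane version:", v.GitVersion) // v1.30.2 in this run
		return nil
	}
	]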
	I0701 12:26:12.057064  653531 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 12:26:12.057160  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 12:26:12.074309  653531 logs.go:276] 2 containers: [f615f587cb12 c36c1d459356]
	I0701 12:26:12.074405  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 12:26:12.100040  653531 logs.go:276] 2 containers: [68c63c4abd01 dff0f4abea41]
	I0701 12:26:12.100116  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 12:26:12.119321  653531 logs.go:276] 0 containers: []
	W0701 12:26:12.119352  653531 logs.go:278] No container was found matching "coredns"
	I0701 12:26:12.119406  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 12:26:12.137547  653531 logs.go:276] 2 containers: [279483668a9c 58811626a0de]
	I0701 12:26:12.137660  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 12:26:12.157321  653531 logs.go:276] 2 containers: [156169e4ac3c 2885f7cf6f93]
	I0701 12:26:12.157417  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 12:26:12.182117  653531 logs.go:276] 2 containers: [a72e102b5bf7 a1160a455902]
	I0701 12:26:12.182204  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 12:26:12.204201  653531 logs.go:276] 2 containers: [c8184f4bc096 8c3a5ac0cf85]
	I0701 12:26:12.204247  653531 logs.go:123] Gathering logs for kube-proxy [2885f7cf6f93] ...
	I0701 12:26:12.204260  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2885f7cf6f93"
	I0701 12:26:12.228173  653531 logs.go:123] Gathering logs for kube-controller-manager [a72e102b5bf7] ...
	I0701 12:26:12.228206  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a72e102b5bf7"
	I0701 12:26:12.267264  653531 logs.go:123] Gathering logs for kindnet [c8184f4bc096] ...
	I0701 12:26:12.267309  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8184f4bc096"
	I0701 12:26:12.294504  653531 logs.go:123] Gathering logs for Docker ...
	I0701 12:26:12.294535  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 12:26:12.344610  653531 logs.go:123] Gathering logs for describe nodes ...
	I0701 12:26:12.344649  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 12:26:12.593887  653531 logs.go:123] Gathering logs for kube-apiserver [c36c1d459356] ...
	I0701 12:26:12.593927  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36c1d459356"
	I0701 12:26:12.665033  653531 logs.go:123] Gathering logs for kube-proxy [156169e4ac3c] ...
	I0701 12:26:12.665082  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 156169e4ac3c"
	I0701 12:26:12.687103  653531 logs.go:123] Gathering logs for container status ...
	I0701 12:26:12.687142  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 12:26:12.735851  653531 logs.go:123] Gathering logs for kubelet ...
	I0701 12:26:12.735886  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 12:26:12.793127  653531 logs.go:123] Gathering logs for kube-apiserver [f615f587cb12] ...
	I0701 12:26:12.793168  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f615f587cb12"
	I0701 12:26:12.823004  653531 logs.go:123] Gathering logs for kindnet [8c3a5ac0cf85] ...
	I0701 12:26:12.823037  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a5ac0cf85"
	I0701 12:26:12.862610  653531 logs.go:123] Gathering logs for kube-scheduler [279483668a9c] ...
	I0701 12:26:12.862650  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483668a9c"
	I0701 12:26:12.883651  653531 logs.go:123] Gathering logs for kube-scheduler [58811626a0de] ...
	I0701 12:26:12.883685  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58811626a0de"
	I0701 12:26:12.905351  653531 logs.go:123] Gathering logs for kube-controller-manager [a1160a455902] ...
	I0701 12:26:12.905388  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1160a455902"
	I0701 12:26:12.938388  653531 logs.go:123] Gathering logs for dmesg ...
	I0701 12:26:12.938427  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 12:26:12.955609  653531 logs.go:123] Gathering logs for etcd [68c63c4abd01] ...
	I0701 12:26:12.955647  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68c63c4abd01"
	I0701 12:26:12.987593  653531 logs.go:123] Gathering logs for etcd [dff0f4abea41] ...
	I0701 12:26:12.987626  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dff0f4abea41"
	I0701 12:26:15.520590  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:26:15.520616  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:15.520625  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:15.520628  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:15.528299  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:26:15.535569  653531 system_pods.go:59] 26 kube-system pods found
	I0701 12:26:15.535603  653531 system_pods.go:61] "coredns-7db6d8ff4d-nk4lf" [c03dd635-a82d-4f18-bd72-ec575f91867e] Running
	I0701 12:26:15.535608  653531 system_pods.go:61] "coredns-7db6d8ff4d-p4rtz" [267efba7-bf34-48d5-ab15-5bda45ff2f4f] Running
	I0701 12:26:15.535613  653531 system_pods.go:61] "etcd-ha-735960" [4b98745c-292f-42b5-977c-69c50fd241f1] Running
	I0701 12:26:15.535617  653531 system_pods.go:61] "etcd-ha-735960-m02" [fed8cdfa-8428-47e0-84ef-05297ad232f8] Running
	I0701 12:26:15.535622  653531 system_pods.go:61] "etcd-ha-735960-m03" [50b07bc3-ff6b-487d-8654-901d96892868] Running
	I0701 12:26:15.535625  653531 system_pods.go:61] "kindnet-2424m" [aa18d5dd-f6eb-4f04-a61e-b0b257e214af] Running
	I0701 12:26:15.535628  653531 system_pods.go:61] "kindnet-6gx8s" [7f46a773-a075-476c-9e54-89f125b4b57a] Running
	I0701 12:26:15.535631  653531 system_pods.go:61] "kindnet-7f6hm" [a8c302b4-1163-4d4f-bfe3-4fd3b5d23cf0] Running
	I0701 12:26:15.535633  653531 system_pods.go:61] "kindnet-bztzv" [7afa0e45-3d10-40bc-b422-7005a3ca9d3a] Running
	I0701 12:26:15.535636  653531 system_pods.go:61] "kube-apiserver-ha-735960" [ad041aaa-465a-4d8a-a8dc-b7665e1d587d] Running
	I0701 12:26:15.535640  653531 system_pods.go:61] "kube-apiserver-ha-735960-m02" [ba28f48e-1c18-47e3-ab11-a9b5588c5c32] Running
	I0701 12:26:15.535642  653531 system_pods.go:61] "kube-apiserver-ha-735960-m03" [baafa3bf-78ee-4269-9591-b0440927e055] Running
	I0701 12:26:15.535646  653531 system_pods.go:61] "kube-controller-manager-ha-735960" [3f0f0cf5-329d-47bc-b922-7583902e2607] Running
	I0701 12:26:15.535649  653531 system_pods.go:61] "kube-controller-manager-ha-735960-m02" [258fde18-ac5c-4446-842b-9465529b154c] Running
	I0701 12:26:15.535652  653531 system_pods.go:61] "kube-controller-manager-ha-735960-m03" [79acc56b-a9e0-4d4b-bc64-1a3a36ddf051] Running
	I0701 12:26:15.535655  653531 system_pods.go:61] "kube-proxy-25ssf" [11f0dc23-ab9d-4d39-988d-4c44dfde86cd] Running
	I0701 12:26:15.535658  653531 system_pods.go:61] "kube-proxy-776rt" [5666dac7-924e-4429-bd1d-a1a5647cc611] Running
	I0701 12:26:15.535661  653531 system_pods.go:61] "kube-proxy-b6knb" [eb36e930-5799-4ff7-821a-ccb22303cd1b] Running
	I0701 12:26:15.535664  653531 system_pods.go:61] "kube-proxy-lphzn" [0761a7a6-740e-4cde-9ab5-e02e8d417907] Running
	I0701 12:26:15.535667  653531 system_pods.go:61] "kube-scheduler-ha-735960" [c624cf42-a7d6-4aaf-859d-1aeaf29f9acb] Running
	I0701 12:26:15.535670  653531 system_pods.go:61] "kube-scheduler-ha-735960-m02" [7de78af7-2d79-46dc-bd34-f221d79fde06] Running
	I0701 12:26:15.535673  653531 system_pods.go:61] "kube-scheduler-ha-735960-m03" [9f9a2030-9332-44af-b8dc-3b4609e53f91] Running
	I0701 12:26:15.535676  653531 system_pods.go:61] "kube-vip-ha-735960" [4299679a-c145-4f4f-8ec6-3cd468b98ef1] Running
	I0701 12:26:15.535679  653531 system_pods.go:61] "kube-vip-ha-735960-m02" [1c9b13e1-515c-43c0-8d99-5ad1c1807727] Running
	I0701 12:26:15.535684  653531 system_pods.go:61] "kube-vip-ha-735960-m03" [7069ea7c-5461-4fe6-a969-97fe33396ebb] Running
	I0701 12:26:15.535688  653531 system_pods.go:61] "storage-provisioner" [f5c4f7f9-d648-4019-a5ea-6ce59f6c5663] Running
	I0701 12:26:15.535693  653531 system_pods.go:74] duration metric: took 3.47862483s to wait for pod list to return data ...
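
	[The 26-pod inventory above comes from a single List of the kube-system namespace followed by a per-pod status walk. A sketch of the equivalent check, assuming client-go v0.30; listSystemPods is an illustrative name, not minikube's helper:

	package syspods

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// listSystemPods performs one List call against kube-system and prints
	// each pod's name, UID, and phase, matching the inventory format above.
	func listSystemPods(ctx context.Context, cs kubernetes.Interface) error {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
		}
		return nil
	}
	]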
	I0701 12:26:15.535701  653531 default_sa.go:34] waiting for default service account to be created ...
	I0701 12:26:15.535798  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/default/serviceaccounts
	I0701 12:26:15.535809  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:15.535816  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:15.535820  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:15.539198  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:15.539410  653531 default_sa.go:45] found service account: "default"
	I0701 12:26:15.539425  653531 default_sa.go:55] duration metric: took 3.71568ms for default service account to be created ...
	I0701 12:26:15.539433  653531 system_pods.go:116] waiting for k8s-apps to be running ...
	I0701 12:26:15.539483  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:26:15.539490  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:15.539497  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:15.539503  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:15.547242  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:26:15.553992  653531 system_pods.go:86] 26 kube-system pods found
	I0701 12:26:15.554026  653531 system_pods.go:89] "coredns-7db6d8ff4d-nk4lf" [c03dd635-a82d-4f18-bd72-ec575f91867e] Running
	I0701 12:26:15.554034  653531 system_pods.go:89] "coredns-7db6d8ff4d-p4rtz" [267efba7-bf34-48d5-ab15-5bda45ff2f4f] Running
	I0701 12:26:15.554040  653531 system_pods.go:89] "etcd-ha-735960" [4b98745c-292f-42b5-977c-69c50fd241f1] Running
	I0701 12:26:15.554046  653531 system_pods.go:89] "etcd-ha-735960-m02" [fed8cdfa-8428-47e0-84ef-05297ad232f8] Running
	I0701 12:26:15.554050  653531 system_pods.go:89] "etcd-ha-735960-m03" [50b07bc3-ff6b-487d-8654-901d96892868] Running
	I0701 12:26:15.554056  653531 system_pods.go:89] "kindnet-2424m" [aa18d5dd-f6eb-4f04-a61e-b0b257e214af] Running
	I0701 12:26:15.554062  653531 system_pods.go:89] "kindnet-6gx8s" [7f46a773-a075-476c-9e54-89f125b4b57a] Running
	I0701 12:26:15.554069  653531 system_pods.go:89] "kindnet-7f6hm" [a8c302b4-1163-4d4f-bfe3-4fd3b5d23cf0] Running
	I0701 12:26:15.554075  653531 system_pods.go:89] "kindnet-bztzv" [7afa0e45-3d10-40bc-b422-7005a3ca9d3a] Running
	I0701 12:26:15.554081  653531 system_pods.go:89] "kube-apiserver-ha-735960" [ad041aaa-465a-4d8a-a8dc-b7665e1d587d] Running
	I0701 12:26:15.554088  653531 system_pods.go:89] "kube-apiserver-ha-735960-m02" [ba28f48e-1c18-47e3-ab11-a9b5588c5c32] Running
	I0701 12:26:15.554099  653531 system_pods.go:89] "kube-apiserver-ha-735960-m03" [baafa3bf-78ee-4269-9591-b0440927e055] Running
	I0701 12:26:15.554107  653531 system_pods.go:89] "kube-controller-manager-ha-735960" [3f0f0cf5-329d-47bc-b922-7583902e2607] Running
	I0701 12:26:15.554115  653531 system_pods.go:89] "kube-controller-manager-ha-735960-m02" [258fde18-ac5c-4446-842b-9465529b154c] Running
	I0701 12:26:15.554123  653531 system_pods.go:89] "kube-controller-manager-ha-735960-m03" [79acc56b-a9e0-4d4b-bc64-1a3a36ddf051] Running
	I0701 12:26:15.554131  653531 system_pods.go:89] "kube-proxy-25ssf" [11f0dc23-ab9d-4d39-988d-4c44dfde86cd] Running
	I0701 12:26:15.554140  653531 system_pods.go:89] "kube-proxy-776rt" [5666dac7-924e-4429-bd1d-a1a5647cc611] Running
	I0701 12:26:15.554148  653531 system_pods.go:89] "kube-proxy-b6knb" [eb36e930-5799-4ff7-821a-ccb22303cd1b] Running
	I0701 12:26:15.554163  653531 system_pods.go:89] "kube-proxy-lphzn" [0761a7a6-740e-4cde-9ab5-e02e8d417907] Running
	I0701 12:26:15.554170  653531 system_pods.go:89] "kube-scheduler-ha-735960" [c624cf42-a7d6-4aaf-859d-1aeaf29f9acb] Running
	I0701 12:26:15.554176  653531 system_pods.go:89] "kube-scheduler-ha-735960-m02" [7de78af7-2d79-46dc-bd34-f221d79fde06] Running
	I0701 12:26:15.554183  653531 system_pods.go:89] "kube-scheduler-ha-735960-m03" [9f9a2030-9332-44af-b8dc-3b4609e53f91] Running
	I0701 12:26:15.554192  653531 system_pods.go:89] "kube-vip-ha-735960" [4299679a-c145-4f4f-8ec6-3cd468b98ef1] Running
	I0701 12:26:15.554199  653531 system_pods.go:89] "kube-vip-ha-735960-m02" [1c9b13e1-515c-43c0-8d99-5ad1c1807727] Running
	I0701 12:26:15.554207  653531 system_pods.go:89] "kube-vip-ha-735960-m03" [7069ea7c-5461-4fe6-a969-97fe33396ebb] Running
	I0701 12:26:15.554216  653531 system_pods.go:89] "storage-provisioner" [f5c4f7f9-d648-4019-a5ea-6ce59f6c5663] Running
	I0701 12:26:15.554229  653531 system_pods.go:126] duration metric: took 14.787055ms to wait for k8s-apps to be running ...
	I0701 12:26:15.554241  653531 system_svc.go:44] waiting for kubelet service to be running ....
	I0701 12:26:15.554296  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:26:15.567890  653531 system_svc.go:56] duration metric: took 13.638054ms WaitForService to wait for kubelet
	I0701 12:26:15.567925  653531 kubeadm.go:576] duration metric: took 1m25.555790211s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
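
	All 26 kube-system pods report Running before the wait loop exits. A minimal way to reproduce that check by hand, assuming the ha-735960 context from this profile's kubeconfig (a sketch; minikube itself queries the API directly via system_pods.go rather than shelling out):

	    # List any kube-system pods NOT in phase Running; empty output
	    # means the cluster is in the state the log reports.
	    kubectl --context ha-735960 -n kube-system get pods \
	      --field-selector=status.phase!=Running --no-headers
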
	I0701 12:26:15.567951  653531 node_conditions.go:102] verifying NodePressure condition ...
	I0701 12:26:15.568047  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes
	I0701 12:26:15.568057  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:15.568067  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:15.568074  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:15.575311  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:26:15.577277  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:26:15.577310  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:26:15.577328  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:26:15.577334  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:26:15.577339  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:26:15.577343  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:26:15.577348  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:26:15.577352  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:26:15.577358  653531 node_conditions.go:105] duration metric: took 9.401356ms to run NodePressure ...
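
	The four identical capacity pairs above correspond to the four nodes returned by GET /api/v1/nodes (three control-plane nodes plus one worker). A hedged equivalent for viewing the same capacities with node names attached:

	    kubectl --context ha-735960 get nodes -o \
	      custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage
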
	I0701 12:26:15.577372  653531 start.go:240] waiting for startup goroutines ...
	I0701 12:26:15.577418  653531 start.go:254] writing updated cluster config ...
	I0701 12:26:15.579876  653531 out.go:177] 
	I0701 12:26:15.581466  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:26:15.581562  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:26:15.583519  653531 out.go:177] * Starting "ha-735960-m03" control-plane node in "ha-735960" cluster
	I0701 12:26:15.584707  653531 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:26:15.584732  653531 cache.go:56] Caching tarball of preloaded images
	I0701 12:26:15.584831  653531 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:26:15.584841  653531 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
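
	The preload check is a pure cache hit: the tarball name encodes the preload schema (v18), Kubernetes version (v1.30.2), container runtime (docker), storage driver (overlay2) and architecture, so a stat of the cached file is all that is needed. The equivalent manual check on the CI host (a sketch):

	    ls -lh /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
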
	I0701 12:26:15.584932  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:26:15.585716  653531 start.go:360] acquireMachinesLock for ha-735960-m03: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:26:15.585768  653531 start.go:364] duration metric: took 28.47µs to acquireMachinesLock for "ha-735960-m03"
	I0701 12:26:15.585785  653531 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:26:15.585798  653531 fix.go:54] fixHost starting: m03
	I0701 12:26:15.586107  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:26:15.586143  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:26:15.603500  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43455
	I0701 12:26:15.603962  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:26:15.604555  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:26:15.604579  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:26:15.604930  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:26:15.605195  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:15.605384  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetState
	I0701 12:26:15.607018  653531 fix.go:112] recreateIfNeeded on ha-735960-m03: state=Stopped err=<nil>
	I0701 12:26:15.607042  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	W0701 12:26:15.607213  653531 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:26:15.609349  653531 out.go:177] * Restarting existing kvm2 VM for "ha-735960-m03" ...
	I0701 12:26:15.610714  653531 main.go:141] libmachine: (ha-735960-m03) Calling .Start
	I0701 12:26:15.610921  653531 main.go:141] libmachine: (ha-735960-m03) Ensuring networks are active...
	I0701 12:26:15.611706  653531 main.go:141] libmachine: (ha-735960-m03) Ensuring network default is active
	I0701 12:26:15.612087  653531 main.go:141] libmachine: (ha-735960-m03) Ensuring network mk-ha-735960 is active
	I0701 12:26:15.612457  653531 main.go:141] libmachine: (ha-735960-m03) Getting domain xml...
	I0701 12:26:15.613082  653531 main.go:141] libmachine: (ha-735960-m03) Creating domain...
	I0701 12:26:16.855928  653531 main.go:141] libmachine: (ha-735960-m03) Waiting to get IP...
	I0701 12:26:16.856767  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:16.857131  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:16.857182  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:16.857114  654164 retry.go:31] will retry after 232.687433ms: waiting for machine to come up
	I0701 12:26:17.091660  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:17.092187  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:17.092229  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:17.092112  654164 retry.go:31] will retry after 320.051772ms: waiting for machine to come up
	I0701 12:26:17.413613  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:17.414125  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:17.414157  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:17.414063  654164 retry.go:31] will retry after 415.446228ms: waiting for machine to come up
	I0701 12:26:17.830725  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:17.831413  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:17.831445  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:17.831349  654164 retry.go:31] will retry after 522.707968ms: waiting for machine to come up
	I0701 12:26:18.356092  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:18.356521  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:18.356543  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:18.356485  654164 retry.go:31] will retry after 572.783424ms: waiting for machine to come up
	I0701 12:26:18.931377  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:18.931831  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:18.931856  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:18.931778  654164 retry.go:31] will retry after 662.269299ms: waiting for machine to come up
	I0701 12:26:19.595406  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:19.595831  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:19.595862  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:19.595779  654164 retry.go:31] will retry after 965.977644ms: waiting for machine to come up
	I0701 12:26:20.562930  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:20.563372  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:20.563432  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:20.563328  654164 retry.go:31] will retry after 1.166893605s: waiting for machine to come up
	I0701 12:26:21.731632  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:21.732082  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:21.732114  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:21.732040  654164 retry.go:31] will retry after 1.800222328s: waiting for machine to come up
	I0701 12:26:23.534948  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:23.535342  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:23.535372  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:23.535277  654164 retry.go:31] will retry after 1.820829305s: waiting for machine to come up
	I0701 12:26:25.357271  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:25.357677  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:25.357701  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:25.357630  654164 retry.go:31] will retry after 1.816274117s: waiting for machine to come up
	I0701 12:26:27.176155  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:27.176621  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:27.176653  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:27.176598  654164 retry.go:31] will retry after 2.782602178s: waiting for machine to come up
	I0701 12:26:29.960991  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:29.961388  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:29.961421  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:29.961334  654164 retry.go:31] will retry after 3.816886888s: waiting for machine to come up
	I0701 12:26:33.779810  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.780404  653531 main.go:141] libmachine: (ha-735960-m03) Found IP for machine: 192.168.39.97
	I0701 12:26:33.780436  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has current primary IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.780448  653531 main.go:141] libmachine: (ha-735960-m03) Reserving static IP address...
	I0701 12:26:33.780953  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "ha-735960-m03", mac: "52:54:00:93:88:f2", ip: "192.168.39.97"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:33.780975  653531 main.go:141] libmachine: (ha-735960-m03) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960-m03", mac: "52:54:00:93:88:f2", ip: "192.168.39.97"}
	I0701 12:26:33.780986  653531 main.go:141] libmachine: (ha-735960-m03) Reserved static IP address: 192.168.39.97
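
	The IP wait above is a polling loop with growing backoff (232ms up to ~3.8s between attempts, retry.go:31) that resolves the VM's DHCP lease by MAC address. Outside minikube, the same lease can be watched with libvirt tooling (a sketch assuming virsh is available on the host; this is not minikube's code path):

	    # Poll libvirt's DHCP leases for the m03 MAC until an address appears.
	    until virsh net-dhcp-leases mk-ha-735960 | grep -q '52:54:00:93:88:f2'; do
	      sleep 1
	    done
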
	I0701 12:26:33.780995  653531 main.go:141] libmachine: (ha-735960-m03) Waiting for SSH to be available...
	I0701 12:26:33.781005  653531 main.go:141] libmachine: (ha-735960-m03) DBG | Getting to WaitForSSH function...
	I0701 12:26:33.783239  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.783609  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:33.783636  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.783742  653531 main.go:141] libmachine: (ha-735960-m03) DBG | Using SSH client type: external
	I0701 12:26:33.783770  653531 main.go:141] libmachine: (ha-735960-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa (-rw-------)
	I0701 12:26:33.783810  653531 main.go:141] libmachine: (ha-735960-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:26:33.783825  653531 main.go:141] libmachine: (ha-735960-m03) DBG | About to run SSH command:
	I0701 12:26:33.783839  653531 main.go:141] libmachine: (ha-735960-m03) DBG | exit 0
	I0701 12:26:33.906528  653531 main.go:141] libmachine: (ha-735960-m03) DBG | SSH cmd err, output: <nil>: 
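
	WaitForSSH succeeds once `exit 0` runs cleanly over the external ssh client. The probe can be repeated by hand with the same flags the log prints, assuming the Jenkins key path above:

	    ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
	        -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	        -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa \
	        docker@192.168.39.97 'exit 0' && echo "SSH is up"
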
	I0701 12:26:33.906854  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetConfigRaw
	I0701 12:26:33.907659  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetIP
	I0701 12:26:33.910504  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.910919  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:33.910952  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.911199  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:26:33.911468  653531 machine.go:94] provisionDockerMachine start ...
	I0701 12:26:33.911493  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:33.911726  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:33.913742  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.914049  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:33.914079  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.914213  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:33.914440  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:33.914614  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:33.914781  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:33.914952  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:33.915169  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:33.915186  653531 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:26:34.022720  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:26:34.022751  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetMachineName
	I0701 12:26:34.023048  653531 buildroot.go:166] provisioning hostname "ha-735960-m03"
	I0701 12:26:34.023086  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetMachineName
	I0701 12:26:34.023302  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.026253  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.026699  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.026731  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.026891  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.027100  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.027330  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.027468  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.027637  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:34.027853  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:34.027872  653531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960-m03 && echo "ha-735960-m03" | sudo tee /etc/hostname
	I0701 12:26:34.143884  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960-m03
	
	I0701 12:26:34.143919  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.146876  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.147233  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.147259  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.147410  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.147595  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.147764  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.147906  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.148107  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:34.148271  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:34.148287  653531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:26:34.259290  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
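
	The script above pins the hostname in /etc/hosts: it rewrites an existing 127.0.1.1 entry in place, or appends one if none exists, so local hostname resolution keeps working without DNS. On the guest the result can be verified with:

	    grep '^127.0.1.1' /etc/hosts   # expected: 127.0.1.1 ha-735960-m03
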
	I0701 12:26:34.259326  653531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:26:34.259348  653531 buildroot.go:174] setting up certificates
	I0701 12:26:34.259361  653531 provision.go:84] configureAuth start
	I0701 12:26:34.259373  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetMachineName
	I0701 12:26:34.259700  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetIP
	I0701 12:26:34.262660  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.263056  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.263088  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.263229  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.265709  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.266104  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.266129  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.266291  653531 provision.go:143] copyHostCerts
	I0701 12:26:34.266320  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:26:34.266385  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:26:34.266399  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:26:34.266510  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:26:34.266616  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:26:34.266642  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:26:34.266651  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:26:34.266687  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:26:34.266758  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:26:34.266785  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:26:34.266794  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:26:34.266828  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:26:34.266895  653531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960-m03 san=[127.0.0.1 192.168.39.97 ha-735960-m03 localhost minikube]
	I0701 12:26:34.565581  653531 provision.go:177] copyRemoteCerts
	I0701 12:26:34.565649  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:26:34.565676  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.568539  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.568839  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.568870  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.569025  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.569261  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.569428  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.569588  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa Username:docker}
	I0701 12:26:34.652136  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:26:34.652230  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:26:34.676227  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:26:34.676305  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0701 12:26:34.699234  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:26:34.699313  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 12:26:34.721885  653531 provision.go:87] duration metric: took 462.509686ms to configureAuth
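
	configureAuth regenerates a server certificate whose SANs cover the node's addresses and names (the `san=[...]` list above), then pushes the CA, cert and key to /etc/docker so dockerd can run with --tlsverify. A quick sanity check of the pushed certificate on the guest (a sketch; assumes an openssl new enough for -ext, i.e. 1.1.1+):

	    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName
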
	I0701 12:26:34.721915  653531 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:26:34.722137  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:26:34.722181  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:34.722494  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.725227  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.725601  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.725629  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.725789  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.725994  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.726175  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.726384  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.726572  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:34.726794  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:34.726809  653531 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:26:34.831674  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:26:34.831699  653531 buildroot.go:70] root file system type: tmpfs
	I0701 12:26:34.831846  653531 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:26:34.831923  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.835107  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.835603  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.835626  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.835928  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.836184  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.836401  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.836577  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.836754  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:34.836963  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:34.837056  653531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.16"
	Environment="NO_PROXY=192.168.39.16,192.168.39.86"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:26:34.951789  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.16
	Environment=NO_PROXY=192.168.39.16,192.168.39.86
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:26:34.951830  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.954854  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.955349  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.955376  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.955552  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.955761  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.955952  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.956104  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.956269  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:34.956436  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:34.956451  653531 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:26:36.820196  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 12:26:36.820235  653531 machine.go:97] duration metric: took 2.908749821s to provisionDockerMachine
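
	The unit update two commands back is idempotent by construction: the new unit is written to docker.service.new, and `diff -u old new || { mv …; restart; }` only swaps the file and restarts docker when the content changed. GNU diff's exit status drives the `||` branch: 0 for identical files (skip), 1 for differences, 2 when the old file is missing, as in the fresh-boot case logged above:

	    diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new
	    echo "exit=$?"   # 0 = identical (skip restart), 1 = differs, 2 = missing/error
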
	I0701 12:26:36.820254  653531 start.go:293] postStartSetup for "ha-735960-m03" (driver="kvm2")
	I0701 12:26:36.820269  653531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:26:36.820322  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:36.820717  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:26:36.820758  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:36.823679  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:36.824131  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:36.824158  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:36.824315  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:36.824557  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:36.824862  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:36.825025  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa Username:docker}
	I0701 12:26:36.909262  653531 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:26:36.913798  653531 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:26:36.913830  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:26:36.913904  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:26:36.913973  653531 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:26:36.913983  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:26:36.914063  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:26:36.924147  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:26:36.949103  653531 start.go:296] duration metric: took 128.830664ms for postStartSetup
	I0701 12:26:36.949169  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:36.949541  653531 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:26:36.949572  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:36.952321  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:36.952670  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:36.952703  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:36.952895  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:36.953116  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:36.953299  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:36.953494  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa Username:docker}
	I0701 12:26:37.037086  653531 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:26:37.037223  653531 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:26:37.097170  653531 fix.go:56] duration metric: took 21.511363009s for fixHost
	I0701 12:26:37.097229  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:37.100519  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.100936  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:37.100988  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.101235  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:37.101494  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:37.101681  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:37.101864  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:37.102058  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:37.102248  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:37.102261  653531 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0701 12:26:37.210872  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836797.190240924
	
	I0701 12:26:37.210897  653531 fix.go:216] guest clock: 1719836797.190240924
	I0701 12:26:37.210906  653531 fix.go:229] Guest: 2024-07-01 12:26:37.190240924 +0000 UTC Remote: 2024-07-01 12:26:37.09720405 +0000 UTC m=+154.567055715 (delta=93.036874ms)
	I0701 12:26:37.210928  653531 fix.go:200] guest clock delta is within tolerance: 93.036874ms
	I0701 12:26:37.210935  653531 start.go:83] releasing machines lock for "ha-735960-m03", held for 21.625157566s
	I0701 12:26:37.210966  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:37.211304  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetIP
	I0701 12:26:37.213807  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.214222  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:37.214255  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.216716  653531 out.go:177] * Found network options:
	I0701 12:26:37.218305  653531 out.go:177]   - NO_PROXY=192.168.39.16,192.168.39.86
	W0701 12:26:37.219816  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:26:37.219845  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:26:37.219865  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:37.220522  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:37.220737  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:37.220844  653531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:26:37.220887  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	W0701 12:26:37.220953  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:26:37.220981  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:26:37.221057  653531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0701 12:26:37.221077  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:37.223616  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.223976  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:37.224003  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.224022  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.224163  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:37.224349  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:37.224476  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:37.224495  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.224522  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:37.224684  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:37.224708  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa Username:docker}
	I0701 12:26:37.224822  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:37.224957  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:37.225089  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa Username:docker}
	W0701 12:26:37.324512  653531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:26:37.324590  653531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:26:37.342354  653531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:26:37.342401  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:26:37.342553  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:26:37.361964  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:26:37.372356  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:26:37.382741  653531 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:26:37.382800  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:26:37.393672  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:26:37.404182  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:26:37.413967  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:26:37.425102  653531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:26:37.436486  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:26:37.448119  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:26:37.459499  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:26:37.470904  653531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:26:37.480202  653531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:26:37.489935  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:37.612275  653531 ssh_runner.go:195] Run: sudo systemctl restart containerd
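
	The sed pass above rewrites /etc/containerd/config.toml so containerd matches the kubelet's cgroupfs driver: SystemdCgroup is forced to false, legacy runtime names are mapped to io.containerd.runc.v2, and the CNI conf_dir is pinned to /etc/cni/net.d. After the restart, the effective setting can be confirmed on the guest with (a sketch):

	    sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
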
	I0701 12:26:37.635575  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:26:37.635692  653531 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:26:37.653571  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:26:37.670438  653531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:26:37.688000  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:26:37.705115  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:26:37.718914  653531 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:26:37.744858  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:26:37.759980  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:26:37.779721  653531 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:26:37.783771  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:26:37.794141  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:26:37.811510  653531 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:26:37.931976  653531 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:26:38.066164  653531 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:26:38.066230  653531 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 12:26:38.083572  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:38.206358  653531 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:26:40.648995  653531 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.442581628s)
	I0701 12:26:40.649094  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 12:26:40.663523  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:26:40.678231  653531 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:26:40.794839  653531 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:26:40.936707  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:41.068605  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:26:41.086480  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:26:41.102238  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:41.225877  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0701 12:26:41.309074  653531 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:26:41.309144  653531 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
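
	The unmask/enable/daemon-reload/restart sequence above brings up cri-dockerd socket-first, so the socket unit owns /var/run/cri-dockerd.sock before the service starts; minikube then waits up to 60s for that path. As a sketch:

	    # Socket-activated cri-dockerd bring-up, condensed from the log:
	    systemctl unmask cri-docker.socket
	    systemctl enable cri-docker.socket
	    systemctl daemon-reload
	    systemctl restart cri-docker.socket     # socket unit first...
	    systemctl restart cri-docker.service    # ...then the service behind it
	    stat /var/run/cri-dockerd.sock          # the path minikube polls for
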
	I0701 12:26:41.314764  653531 start.go:562] Will wait 60s for crictl version
	I0701 12:26:41.314839  653531 ssh_runner.go:195] Run: which crictl
	I0701 12:26:41.318792  653531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:26:41.356836  653531 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.1
	RuntimeApiVersion:  v1
	I0701 12:26:41.356927  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:26:41.383790  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:26:41.409143  653531 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
	I0701 12:26:41.410603  653531 out.go:177]   - env NO_PROXY=192.168.39.16
	I0701 12:26:41.412215  653531 out.go:177]   - env NO_PROXY=192.168.39.16,192.168.39.86
	I0701 12:26:41.413404  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetIP
	I0701 12:26:41.416274  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:41.416763  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:41.416796  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:41.417070  653531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0701 12:26:41.421392  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
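
	The one-liner above is minikube's idempotent /etc/hosts update: filter out any stale host.minikube.internal line, append the current gateway IP, write to a temp file, then cp it back. cp writes through the existing inode, which (unlike mv or sed -i) also stays safe when /etc/hosts is a bind mount, as with container drivers. Isolated:

	    # Idempotent hosts-entry refresh (values from the log):
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      echo $'192.168.39.1\thost.minikube.internal'; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts
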
	I0701 12:26:41.434549  653531 mustload.go:65] Loading cluster: ha-735960
	I0701 12:26:41.434797  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:26:41.435079  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:26:41.435129  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:26:41.451156  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45677
	I0701 12:26:41.451676  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:26:41.452212  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:26:41.452237  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:26:41.452614  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:26:41.452827  653531 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:26:41.454575  653531 host.go:66] Checking if "ha-735960" exists ...
	I0701 12:26:41.454891  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:26:41.454938  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:26:41.471129  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33243
	I0701 12:26:41.471681  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:26:41.472198  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:26:41.472222  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:26:41.472612  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:26:41.472844  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:26:41.473032  653531 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960 for IP: 192.168.39.97
	I0701 12:26:41.473049  653531 certs.go:194] generating shared ca certs ...
	I0701 12:26:41.473074  653531 certs.go:226] acquiring lock for ca certs: {Name:mk34e166bfd069e523b2325e14d1812c523bff53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:26:41.473230  653531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key
	I0701 12:26:41.473268  653531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key
	I0701 12:26:41.473278  653531 certs.go:256] generating profile certs ...
	I0701 12:26:41.473349  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key
	I0701 12:26:41.473405  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.f1482ab5
	I0701 12:26:41.473453  653531 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key
	I0701 12:26:41.473465  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 12:26:41.473478  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 12:26:41.473490  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 12:26:41.473503  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 12:26:41.473514  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0701 12:26:41.473528  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0701 12:26:41.473537  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0701 12:26:41.473548  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0701 12:26:41.473603  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem (1338 bytes)
	W0701 12:26:41.473630  653531 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854_empty.pem, impossibly tiny 0 bytes
	I0701 12:26:41.473639  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 12:26:41.473659  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:26:41.473680  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:26:41.473702  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem (1675 bytes)
	I0701 12:26:41.473736  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:26:41.473759  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem -> /usr/share/ca-certificates/637854.pem
	I0701 12:26:41.473772  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /usr/share/ca-certificates/6378542.pem
	I0701 12:26:41.473784  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:26:41.494518  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:26:41.498371  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:26:41.498974  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:26:41.499011  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:26:41.499158  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:26:41.499416  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:26:41.499610  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:26:41.499835  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:26:41.570757  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0701 12:26:41.575932  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0701 12:26:41.587511  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0701 12:26:41.591633  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0701 12:26:41.604961  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0701 12:26:41.609152  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0701 12:26:41.619653  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0701 12:26:41.623572  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0701 12:26:41.634171  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0701 12:26:41.638176  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0701 12:26:41.654120  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0701 12:26:41.659095  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0701 12:26:41.671865  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:26:41.701740  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:26:41.726445  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:26:41.751925  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:26:41.776782  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0701 12:26:41.801611  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 12:26:41.825786  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 12:26:41.849992  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 12:26:41.873760  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem --> /usr/share/ca-certificates/637854.pem (1338 bytes)
	I0701 12:26:41.898685  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /usr/share/ca-certificates/6378542.pem (1708 bytes)
	I0701 12:26:41.923397  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:26:41.948251  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0701 12:26:41.965919  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0701 12:26:41.982966  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0701 12:26:42.001626  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0701 12:26:42.019386  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0701 12:26:42.036382  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0701 12:26:42.053238  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
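
	At this point every CA, profile cert, key, and the shared kubeconfig has been staged into /var/lib/minikube on the new node. A quick in-guest spot check of what the scp batch above produced (a sketch, not part of the test itself):

	    # Verify the staged material inside ha-735960-m03:
	    sudo ls -l /var/lib/minikube/certs /var/lib/minikube/certs/etcd
	    sudo openssl x509 -noout -subject -enddate -in /var/lib/minikube/certs/apiserver.crt
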
	I0701 12:26:42.070881  653531 ssh_runner.go:195] Run: openssl version
	I0701 12:26:42.076651  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/637854.pem && ln -fs /usr/share/ca-certificates/637854.pem /etc/ssl/certs/637854.pem"
	I0701 12:26:42.087389  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/637854.pem
	I0701 12:26:42.093055  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 12:11 /usr/share/ca-certificates/637854.pem
	I0701 12:26:42.093154  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/637854.pem
	I0701 12:26:42.099823  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/637854.pem /etc/ssl/certs/51391683.0"
	I0701 12:26:42.111701  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6378542.pem && ln -fs /usr/share/ca-certificates/6378542.pem /etc/ssl/certs/6378542.pem"
	I0701 12:26:42.125593  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6378542.pem
	I0701 12:26:42.130163  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 12:11 /usr/share/ca-certificates/6378542.pem
	I0701 12:26:42.130246  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem
	I0701 12:26:42.136102  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6378542.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 12:26:42.147064  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:26:42.159086  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:26:42.163767  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:26:42.163864  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:26:42.170462  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
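
	The ln -fs targets above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names: the library resolves CAs in /etc/ssl/certs by hashing the certificate subject, so each trusted cert needs a <hash>.0 symlink, and the hash printed by each openssl x509 -hash run above is exactly that link name:

	    # Deriving the <subject-hash>.0 link name, as done for minikubeCA.pem:
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"    # h = b5213941 here
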
	I0701 12:26:42.181119  653531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 12:26:42.185711  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 12:26:42.191736  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 12:26:42.198232  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 12:26:42.204698  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 12:26:42.210909  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 12:26:42.216837  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
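
	Each openssl x509 -checkend 86400 run above exits 0 only if the certificate is still valid 24 hours from now; a non-zero exit is what would trigger regeneration. For example:

	    # Exit-status-driven expiry check, as used for each cert above:
	    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
	        echo "valid for at least another 24h; keep"
	    else
	        echo "expires within 24h; regenerate"
	    fi
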
	I0701 12:26:42.222755  653531 kubeadm.go:928] updating node {m03 192.168.39.97 8443 v1.30.2 docker true true} ...
	I0701 12:26:42.222878  653531 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0701 12:26:42.222906  653531 kube-vip.go:115] generating kube-vip config ...
	I0701 12:26:42.222955  653531 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0701 12:26:42.237298  653531 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0701 12:26:42.237376  653531 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
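
	The lb_enable/lb_port entries in the manifest above are conditional: they are injected only because the modprobe at 12:26:42.222955 confirmed IPVS support (the "auto-enabling control-plane load-balancing" line), so kube-vip both owns the VIP 192.168.39.254 and load-balances :8443 across the control planes. The probe, isolated:

	    # IPVS capability probe that gates lb_enable above:
	    sudo modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
	    lsmod | grep -E '^ip_vs'    # present -> kube-vip may load-balance the VIP
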
	I0701 12:26:42.237455  653531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 12:26:42.247439  653531 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:26:42.247515  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0701 12:26:42.257290  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0701 12:26:42.274152  653531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:26:42.290241  653531 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0701 12:26:42.308095  653531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0701 12:26:42.312034  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:26:42.325214  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:42.447612  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
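
	With the kubelet drop-in, unit file, and static-pod manifest in place and kubelet started, the node should come up running kube-vip. A minimal in-guest check (a sketch; crictl goes through the cri-dockerd socket configured earlier):

	    # Post-start sanity check on m03:
	    systemctl is-active kubelet
	    sudo crictl ps --name kube-vip    # static pod picked up from /etc/kubernetes/manifests
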
	I0701 12:26:42.465983  653531 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:26:42.466298  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:26:42.468248  653531 out.go:177] * Verifying Kubernetes components...
	I0701 12:26:42.469706  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:42.625060  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:26:42.647149  653531 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:26:42.647532  653531 kapi.go:59] client config for ha-735960: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key", CAFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfbb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0701 12:26:42.647632  653531 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.16:8443
	I0701 12:26:42.647948  653531 node_ready.go:35] waiting up to 6m0s for node "ha-735960-m03" to be "Ready" ...
	I0701 12:26:42.648043  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:42.648055  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:42.648066  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:42.648079  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:42.652553  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
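
	node_ready polls GET /api/v1/nodes/ha-735960-m03 roughly every 500ms and inspects the Node's Ready condition; note the requests go to 192.168.39.16:8443 directly because the kubeconfig's VIP host was overridden as stale at 12:26:42.647632. An equivalent probe by hand (cert paths from the client config dump above; the jq filter is illustrative, not minikube's own code):

	    # Hand-rolled version of the readiness poll below:
	    MK=/home/jenkins/minikube-integration/19166-630650/.minikube
	    curl -s --cacert "$MK/ca.crt" \
	         --cert "$MK/profiles/ha-735960/client.crt" \
	         --key  "$MK/profiles/ha-735960/client.key" \
	         https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03 \
	      | jq -r '.status.conditions[] | select(.type=="Ready") | .status'
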
	I0701 12:26:43.148887  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:43.148913  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.148924  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.148931  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.152504  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:43.153020  653531 node_ready.go:49] node "ha-735960-m03" has status "Ready":"True"
	I0701 12:26:43.153041  653531 node_ready.go:38] duration metric: took 505.070913ms for node "ha-735960-m03" to be "Ready" ...
	I0701 12:26:43.153051  653531 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 12:26:43.153132  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:26:43.153144  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.153154  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.153161  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.159789  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
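
	pod_ready then fans the same wait out over every system-critical pod, matched by the labels listed at 12:26:43.153051, pairing each pod GET with a GET of its node. Expressed with kubectl (a sketch, not the test's own mechanism):

	    # Same wait via kubectl, labels copied from the pod_ready log line:
	    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	        kubectl -n kube-system wait pod -l "$sel" --for=condition=Ready --timeout=6m
	    done
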
	I0701 12:26:43.167076  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.167158  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:26:43.167167  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.167175  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.167179  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.169757  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.170310  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:43.170347  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.170357  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.170362  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.173097  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.173879  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:43.173897  653531 pod_ready.go:81] duration metric: took 6.79477ms for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.173905  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.173970  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p4rtz
	I0701 12:26:43.173977  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.173984  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.173987  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.176719  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.177389  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:43.177403  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.177410  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.177415  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.180272  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.180876  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:43.180892  653531 pod_ready.go:81] duration metric: took 6.981686ms for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.180901  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.180946  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960
	I0701 12:26:43.180953  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.180959  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.180963  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.183979  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:43.184715  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:43.184733  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.184744  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.184750  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.187303  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.187727  653531 pod_ready.go:92] pod "etcd-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:43.187743  653531 pod_ready.go:81] duration metric: took 6.837753ms for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.187751  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.187803  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m02
	I0701 12:26:43.187810  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.187816  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.187820  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.190206  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.190728  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:43.190744  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.190753  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.190761  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.193433  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.194190  653531 pod_ready.go:92] pod "etcd-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:43.194207  653531 pod_ready.go:81] duration metric: took 6.448739ms for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.194216  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.349638  653531 request.go:629] Waited for 155.349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:43.349754  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:43.349767  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.349778  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.349790  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.354862  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:26:43.548911  653531 request.go:629] Waited for 193.270032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:43.548983  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:43.549014  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.549029  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.549034  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.554047  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:26:43.749322  653531 request.go:629] Waited for 54.224497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:43.749397  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:43.749405  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.749423  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.749433  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.753610  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:26:43.949318  653531 request.go:629] Waited for 194.40537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:43.949442  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:43.949455  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.949466  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.949475  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.953476  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:44.195013  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:44.195041  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:44.195053  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:44.195058  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:44.198623  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:44.349775  653531 request.go:629] Waited for 150.337133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:44.349881  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:44.349890  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:44.349901  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:44.349909  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:44.354832  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:26:44.694539  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:44.694560  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:44.694569  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:44.694573  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:44.698072  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:44.749262  653531 request.go:629] Waited for 50.212385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:44.749342  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:44.749357  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:44.749376  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:44.749400  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:44.759594  653531 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0701 12:26:45.194608  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:45.194639  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:45.194651  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:45.194656  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:45.198135  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:45.199157  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:45.199178  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:45.199187  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:45.199193  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:45.201747  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:45.202475  653531 pod_ready.go:102] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:45.695358  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:45.695387  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:45.695398  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:45.695405  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:45.698583  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:45.699570  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:45.699591  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:45.699603  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:45.699611  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:45.702299  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:46.195334  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:46.195357  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:46.195366  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:46.195369  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:46.199158  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:46.200116  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:46.200134  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:46.200146  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:46.200153  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:46.203740  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:46.695210  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:46.695238  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:46.695250  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:46.695257  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:46.698972  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:46.699688  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:46.699709  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:46.699722  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:46.699728  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:46.703576  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:47.194463  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:47.194494  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:47.194504  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:47.194512  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:47.197423  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:47.198125  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:47.198144  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:47.198156  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:47.198166  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:47.201172  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:47.695417  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:47.695446  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:47.695457  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:47.695463  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:47.698528  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:47.699400  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:47.699424  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:47.699435  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:47.699440  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:47.702619  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:47.703202  653531 pod_ready.go:102] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:48.194609  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:48.194632  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:48.194640  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:48.194656  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:48.197877  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:48.198784  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:48.198804  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:48.198815  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:48.198819  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:48.201611  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:48.694433  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:48.694459  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:48.694471  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:48.694478  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:48.697539  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:48.698170  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:48.698185  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:48.698193  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:48.698196  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:48.700886  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:49.194905  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:49.194931  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:49.194942  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:49.194954  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:49.199572  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:26:49.200541  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:49.200560  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:49.200570  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:49.200575  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:49.204090  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:49.694531  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:49.694551  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:49.694559  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:49.694563  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:49.698105  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:49.699044  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:49.699062  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:49.699073  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:49.699078  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:49.701617  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:50.195294  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:50.195322  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:50.195333  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:50.195338  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:50.198820  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:50.199561  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:50.199579  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:50.199588  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:50.199594  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:50.202455  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:50.203029  653531 pod_ready.go:102] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:50.694678  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:50.694700  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:50.694708  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:50.694712  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:50.697694  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:50.698383  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:50.698401  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:50.698409  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:50.698413  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:50.701398  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:51.195484  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:51.195522  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:51.195535  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:51.195539  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:51.199113  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:51.199788  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:51.199804  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:51.199811  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:51.199815  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:51.202679  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:51.695276  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:51.695304  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:51.695318  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:51.695325  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:51.698725  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:51.699425  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:51.699444  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:51.699454  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:51.699461  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:51.702960  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:52.195136  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:52.195168  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:52.195178  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:52.195182  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:52.198421  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:52.199068  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:52.199081  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:52.199089  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:52.199133  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:52.201737  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:52.695128  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:52.695153  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:52.695161  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:52.695165  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:52.698791  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:52.699625  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:52.699640  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:52.699647  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:52.699666  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:52.702284  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:52.702827  653531 pod_ready.go:102] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:53.194518  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:53.194542  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:53.194550  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:53.194555  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:53.197969  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:53.198583  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:53.198602  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:53.198610  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:53.198615  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:53.201376  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:53.695296  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:53.695318  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:53.695326  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:53.695331  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:53.699078  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:53.699884  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:53.699910  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:53.699922  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:53.699929  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:53.703186  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:54.195014  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:54.195043  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:54.195054  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:54.195058  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:54.199057  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:54.199733  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:54.199750  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:54.199758  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:54.199763  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:54.202961  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:54.695177  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:54.695212  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:54.695225  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:54.695233  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:54.698371  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:54.699201  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:54.699216  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:54.699224  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:54.699227  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:54.702002  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:55.194543  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:55.194566  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:55.194574  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:55.194579  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:55.198201  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:55.198814  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:55.198832  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:55.198839  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:55.198843  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:55.201469  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:55.201993  653531 pod_ready.go:102] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:55.694950  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:55.694972  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:55.694983  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:55.694990  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:55.698498  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:55.699087  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:55.699101  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:55.699108  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:55.699112  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:55.701817  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.194521  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:56.194544  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.194552  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.194557  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.197837  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:56.198482  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:56.198499  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.198505  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.198509  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.201147  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.201653  653531 pod_ready.go:92] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:56.201674  653531 pod_ready.go:81] duration metric: took 13.007452083s for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
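
[editor's note] The 13s wait above is the standard readiness poll: roughly every 500ms (see the .19x/.69x timestamps) the pod is fetched and its Ready condition inspected, with a follow-up GET on the hosting node, until the condition reports True or the 6m0s budget runs out. Below is a minimal sketch of that loop using client-go; the interval, namespace, and pod name are taken from the log, but the helper itself is illustrative, not minikube's actual pod_ready implementation.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // ~500ms cadence and a 6m budget, matching the log above.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-735960-m03", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat as transient and keep polling
                }
                return podReady(pod), nil
            })
        fmt.Println("ready:", err == nil)
    }
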
	I0701 12:26:56.201692  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:56.201750  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960
	I0701 12:26:56.201757  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.201764  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.201770  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.204418  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.205132  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:56.205148  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.205154  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.205158  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.207485  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.207887  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:56.207907  653531 pod_ready.go:81] duration metric: took 6.206212ms for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:56.207916  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:56.207971  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m02
	I0701 12:26:56.207981  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.207988  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.207992  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.210274  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.210769  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:56.210784  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.210791  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.210795  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.213307  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.213730  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:56.213745  653531 pod_ready.go:81] duration metric: took 5.823695ms for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:56.213752  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:56.213799  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:56.213806  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.213813  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.213817  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.221893  653531 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0701 12:26:56.222630  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:56.222650  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.222661  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.222665  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.225298  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.714434  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:56.714457  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.714466  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.714473  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.717715  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:56.718387  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:56.718404  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.718414  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.718420  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.721172  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:57.213955  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:57.213979  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:57.213987  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:57.213992  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:57.217394  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:57.218050  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:57.218071  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:57.218082  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:57.218088  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:57.221478  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:57.714757  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:57.714779  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:57.714787  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:57.714792  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:57.717911  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:57.718695  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:57.718720  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:57.718734  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:57.718740  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:57.721551  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:58.214582  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:58.214605  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:58.214613  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:58.214616  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:58.218396  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:58.219147  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:58.219167  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:58.219174  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:58.219178  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:58.221830  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:58.222386  653531 pod_ready.go:102] pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:58.714864  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:58.714890  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:58.714901  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:58.714906  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:58.718181  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:58.718855  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:58.718874  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:58.718881  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:58.718885  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:58.722484  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:59.214439  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:59.214472  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:59.214484  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:59.214491  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:59.217758  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:59.218712  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:59.218732  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:59.218738  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:59.218742  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:59.221527  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:59.713995  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:59.714020  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:59.714028  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:59.714033  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:59.717121  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:59.717838  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:59.717855  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:59.717862  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:59.717866  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:59.720568  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:00.214542  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:00.214568  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:00.214578  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:00.214583  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:00.218220  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:00.218919  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:00.218938  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:00.218947  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:00.218954  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:00.222119  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:00.223039  653531 pod_ready.go:102] pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:27:00.714993  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:00.715015  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:00.715023  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:00.715027  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:00.718022  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:00.718871  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:00.718894  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:00.718905  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:00.718910  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:00.721660  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:01.214293  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:01.214320  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:01.214345  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:01.214354  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:01.217660  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:01.218619  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:01.218636  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:01.218645  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:01.218649  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:01.221248  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:01.714569  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:01.714593  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:01.714602  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:01.714607  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:01.717986  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:01.718877  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:01.718900  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:01.718912  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:01.718917  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:01.722103  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.213928  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:02.213953  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.213961  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.213965  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.217318  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.218078  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:02.218093  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.218099  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.218102  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.221493  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.714825  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:02.714849  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.714857  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.714862  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.718359  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.719162  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:02.719180  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.719188  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.719193  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.722363  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.723005  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:02.723029  653531 pod_ready.go:81] duration metric: took 6.509269845s for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.723044  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.723152  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960
	I0701 12:27:02.723163  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.723174  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.723186  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.726502  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.727250  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:02.727266  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.727277  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.727280  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.730522  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.731090  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:02.731116  653531 pod_ready.go:81] duration metric: took 8.062099ms for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.731129  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.731206  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m02
	I0701 12:27:02.731216  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.731226  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.731232  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.734354  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.735350  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:02.735370  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.735378  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.735381  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.738250  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:02.739014  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:02.739035  653531 pod_ready.go:81] duration metric: took 7.898052ms for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.739045  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.739108  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m03
	I0701 12:27:02.739116  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.739125  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.739134  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.742376  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.743084  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:02.743106  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.743117  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.743121  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.746455  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.747046  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:02.747075  653531 pod_ready.go:81] duration metric: took 8.017741ms for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.747091  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.747213  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25ssf
	I0701 12:27:02.747226  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.747237  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.747242  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.750009  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:02.750887  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:27:02.750910  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.750941  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.750947  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.753841  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:02.754410  653531 pod_ready.go:97] node "ha-735960-m04" hosting pod "kube-proxy-25ssf" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m04" has status "Ready":"Unknown"
	I0701 12:27:02.754439  653531 pod_ready.go:81] duration metric: took 7.336267ms for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	E0701 12:27:02.754453  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m04" hosting pod "kube-proxy-25ssf" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m04" has status "Ready":"Unknown"
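
[editor's note] kube-proxy-25ssf is skipped rather than waited on because its hosting node, ha-735960-m04, reports Ready as "Unknown" — the m04 VM is still stopped at this point and is only restarted further down this log. The gate being applied is the node's Ready condition, roughly as below (reusing the imports from the previous sketch; illustrative, not minikube's code):

    // nodeReady reports whether the node's Ready condition is True.
    // "Unknown" (e.g. the kubelet has stopped posting status) fails the
    // gate just like "False", which is why the pod above is skipped.
    func nodeReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
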
	I0701 12:27:02.754464  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.915931  653531 request.go:629] Waited for 161.334912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:27:02.916009  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:27:02.916016  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.916026  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.916032  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.922578  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:27:03.115563  653531 request.go:629] Waited for 192.243271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:03.115665  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:03.115679  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:03.115693  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:03.115702  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:03.119673  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:03.120379  653531 pod_ready.go:92] pod "kube-proxy-776rt" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:03.120399  653531 pod_ready.go:81] duration metric: took 365.926734ms for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
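
[editor's note] The "Waited ... due to client-side throttling, not priority and fairness" lines are emitted by client-go itself (request.go in the traces above), not by the API server: the client's token-bucket limiter has been exhausted by the burst of readiness GETs, so each further request queues locally (161–196ms in the lines above). The knobs live on rest.Config, sketched below; the values shown are client-go's defaults, and changing them is illustrative tuning, not something this run does.

    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    // client-go defaults: 5 requests/s steady state, bursts of 10.
    // Once the burst is spent, the limiter holds each request back and
    // request.go logs the throttling message seen above.
    cfg.QPS = 5
    cfg.Burst = 10
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    _ = cs
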
	I0701 12:27:03.120409  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:03.315515  653531 request.go:629] Waited for 195.003147ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:27:03.315575  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:27:03.315580  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:03.315588  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:03.315593  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:03.319367  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:03.515329  653531 request.go:629] Waited for 195.408895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:03.515421  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:03.515429  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:03.515440  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:03.515452  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:03.518825  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:03.519611  653531 pod_ready.go:92] pod "kube-proxy-b6knb" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:03.519633  653531 pod_ready.go:81] duration metric: took 399.213433ms for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:03.519642  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:03.715721  653531 request.go:629] Waited for 195.977677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:27:03.715811  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:27:03.715820  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:03.715828  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:03.715833  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:03.720058  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:03.915338  653531 request.go:629] Waited for 194.486914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:03.915438  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:03.915447  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:03.915455  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:03.915462  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:03.919143  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:03.919765  653531 pod_ready.go:92] pod "kube-proxy-lphzn" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:03.919789  653531 pod_ready.go:81] duration metric: took 400.14123ms for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:03.919800  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:04.114907  653531 request.go:629] Waited for 195.032639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:27:04.114983  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:27:04.115004  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:04.115019  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:04.115027  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:04.119283  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:04.315128  653531 request.go:629] Waited for 195.065236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:04.315231  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:04.315243  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:04.315255  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:04.315264  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:04.319107  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:04.319792  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:04.319821  653531 pod_ready.go:81] duration metric: took 400.011957ms for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:04.319838  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:04.515786  653531 request.go:629] Waited for 195.848501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:27:04.515865  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:27:04.515872  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:04.515885  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:04.515894  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:04.519607  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:04.715555  653531 request.go:629] Waited for 195.254305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:04.715662  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:04.715673  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:04.715686  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:04.715696  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:04.718989  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:04.719533  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:04.719555  653531 pod_ready.go:81] duration metric: took 399.709368ms for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:04.719565  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:04.915742  653531 request.go:629] Waited for 196.076319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:27:04.915873  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:27:04.915884  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:04.915892  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:04.915896  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:04.919910  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:05.114903  653531 request.go:629] Waited for 194.321141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:05.114998  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:05.115010  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.115020  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.115029  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.118835  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:05.119325  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:05.119348  653531 pod_ready.go:81] duration metric: took 399.776156ms for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:05.119360  653531 pod_ready.go:38] duration metric: took 21.966297492s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 12:27:05.119380  653531 api_server.go:52] waiting for apiserver process to appear ...
	I0701 12:27:05.119446  653531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 12:27:05.134970  653531 api_server.go:72] duration metric: took 22.668924734s to wait for apiserver process to appear ...
	I0701 12:27:05.135005  653531 api_server.go:88] waiting for apiserver healthz status ...
	I0701 12:27:05.135037  653531 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0701 12:27:05.139924  653531 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0701 12:27:05.140029  653531 round_trippers.go:463] GET https://192.168.39.16:8443/version
	I0701 12:27:05.140040  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.140052  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.140060  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.141045  653531 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0701 12:27:05.141124  653531 api_server.go:141] control plane version: v1.30.2
	I0701 12:27:05.141142  653531 api_server.go:131] duration metric: took 6.129152ms to wait for apiserver health ...
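
[editor's note] The health gate above is two cheap requests: a raw GET on /healthz, which answers with the literal body "ok", followed by GET /version to read the control-plane version. With a clientset in hand, the same checks look roughly like this (illustrative, not minikube's api_server.go):

    // Raw /healthz probe: a healthy apiserver returns the body "ok".
    body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.Background()).Raw()
    if err != nil {
        panic(err)
    }
    fmt.Println("healthz:", string(body))

    // Version check, matching "control plane version: v1.30.2" above.
    v, err := cs.Discovery().ServerVersion()
    if err != nil {
        panic(err)
    }
    fmt.Println("version:", v.GitVersion)
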
	I0701 12:27:05.141156  653531 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 12:27:05.315496  653531 request.go:629] Waited for 174.257848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:27:05.315603  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:27:05.315615  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.315627  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.315640  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.331176  653531 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0701 12:27:05.341126  653531 system_pods.go:59] 26 kube-system pods found
	I0701 12:27:05.341168  653531 system_pods.go:61] "coredns-7db6d8ff4d-nk4lf" [c03dd635-a82d-4f18-bd72-ec575f91867e] Running
	I0701 12:27:05.341173  653531 system_pods.go:61] "coredns-7db6d8ff4d-p4rtz" [267efba7-bf34-48d5-ab15-5bda45ff2f4f] Running
	I0701 12:27:05.341177  653531 system_pods.go:61] "etcd-ha-735960" [4b98745c-292f-42b5-977c-69c50fd241f1] Running
	I0701 12:27:05.341181  653531 system_pods.go:61] "etcd-ha-735960-m02" [fed8cdfa-8428-47e0-84ef-05297ad232f8] Running
	I0701 12:27:05.341184  653531 system_pods.go:61] "etcd-ha-735960-m03" [50b07bc3-ff6b-487d-8654-901d96892868] Running
	I0701 12:27:05.341187  653531 system_pods.go:61] "kindnet-2424m" [aa18d5dd-f6eb-4f04-a61e-b0b257e214af] Running
	I0701 12:27:05.341190  653531 system_pods.go:61] "kindnet-6gx8s" [7f46a773-a075-476c-9e54-89f125b4b57a] Running
	I0701 12:27:05.341195  653531 system_pods.go:61] "kindnet-7f6hm" [a8c302b4-1163-4d4f-bfe3-4fd3b5d23cf0] Running
	I0701 12:27:05.341199  653531 system_pods.go:61] "kindnet-bztzv" [7afa0e45-3d10-40bc-b422-7005a3ca9d3a] Running
	I0701 12:27:05.341203  653531 system_pods.go:61] "kube-apiserver-ha-735960" [ad041aaa-465a-4d8a-a8dc-b7665e1d587d] Running
	I0701 12:27:05.341208  653531 system_pods.go:61] "kube-apiserver-ha-735960-m02" [ba28f48e-1c18-47e3-ab11-a9b5588c5c32] Running
	I0701 12:27:05.341213  653531 system_pods.go:61] "kube-apiserver-ha-735960-m03" [baafa3bf-78ee-4269-9591-b0440927e055] Running
	I0701 12:27:05.341218  653531 system_pods.go:61] "kube-controller-manager-ha-735960" [3f0f0cf5-329d-47bc-b922-7583902e2607] Running
	I0701 12:27:05.341222  653531 system_pods.go:61] "kube-controller-manager-ha-735960-m02" [258fde18-ac5c-4446-842b-9465529b154c] Running
	I0701 12:27:05.341231  653531 system_pods.go:61] "kube-controller-manager-ha-735960-m03" [79acc56b-a9e0-4d4b-bc64-1a3a36ddf051] Running
	I0701 12:27:05.341235  653531 system_pods.go:61] "kube-proxy-25ssf" [11f0dc23-ab9d-4d39-988d-4c44dfde86cd] Running
	I0701 12:27:05.341244  653531 system_pods.go:61] "kube-proxy-776rt" [5666dac7-924e-4429-bd1d-a1a5647cc611] Running
	I0701 12:27:05.341248  653531 system_pods.go:61] "kube-proxy-b6knb" [eb36e930-5799-4ff7-821a-ccb22303cd1b] Running
	I0701 12:27:05.341253  653531 system_pods.go:61] "kube-proxy-lphzn" [0761a7a6-740e-4cde-9ab5-e02e8d417907] Running
	I0701 12:27:05.341258  653531 system_pods.go:61] "kube-scheduler-ha-735960" [c624cf42-a7d6-4aaf-859d-1aeaf29f9acb] Running
	I0701 12:27:05.341266  653531 system_pods.go:61] "kube-scheduler-ha-735960-m02" [7de78af7-2d79-46dc-bd34-f221d79fde06] Running
	I0701 12:27:05.341276  653531 system_pods.go:61] "kube-scheduler-ha-735960-m03" [9f9a2030-9332-44af-b8dc-3b4609e53f91] Running
	I0701 12:27:05.341284  653531 system_pods.go:61] "kube-vip-ha-735960" [4299679a-c145-4f4f-8ec6-3cd468b98ef1] Running
	I0701 12:27:05.341289  653531 system_pods.go:61] "kube-vip-ha-735960-m02" [1c9b13e1-515c-43c0-8d99-5ad1c1807727] Running
	I0701 12:27:05.341296  653531 system_pods.go:61] "kube-vip-ha-735960-m03" [7069ea7c-5461-4fe6-a969-97fe33396ebb] Running
	I0701 12:27:05.341300  653531 system_pods.go:61] "storage-provisioner" [f5c4f7f9-d648-4019-a5ea-6ce59f6c5663] Running
	I0701 12:27:05.341308  653531 system_pods.go:74] duration metric: took 200.142768ms to wait for pod list to return data ...
	I0701 12:27:05.341319  653531 default_sa.go:34] waiting for default service account to be created ...
	I0701 12:27:05.515805  653531 request.go:629] Waited for 174.38988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/default/serviceaccounts
	I0701 12:27:05.515869  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/default/serviceaccounts
	I0701 12:27:05.515874  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.515882  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.515886  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.519545  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:05.519680  653531 default_sa.go:45] found service account: "default"
	I0701 12:27:05.519701  653531 default_sa.go:55] duration metric: took 178.373792ms for default service account to be created ...
	I0701 12:27:05.519712  653531 system_pods.go:116] waiting for k8s-apps to be running ...
	I0701 12:27:05.715337  653531 request.go:629] Waited for 195.548539ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:27:05.715405  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:27:05.715411  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.715423  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.715431  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.722571  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:27:05.729587  653531 system_pods.go:86] 26 kube-system pods found
	I0701 12:27:05.729628  653531 system_pods.go:89] "coredns-7db6d8ff4d-nk4lf" [c03dd635-a82d-4f18-bd72-ec575f91867e] Running
	I0701 12:27:05.729636  653531 system_pods.go:89] "coredns-7db6d8ff4d-p4rtz" [267efba7-bf34-48d5-ab15-5bda45ff2f4f] Running
	I0701 12:27:05.729642  653531 system_pods.go:89] "etcd-ha-735960" [4b98745c-292f-42b5-977c-69c50fd241f1] Running
	I0701 12:27:05.729649  653531 system_pods.go:89] "etcd-ha-735960-m02" [fed8cdfa-8428-47e0-84ef-05297ad232f8] Running
	I0701 12:27:05.729655  653531 system_pods.go:89] "etcd-ha-735960-m03" [50b07bc3-ff6b-487d-8654-901d96892868] Running
	I0701 12:27:05.729661  653531 system_pods.go:89] "kindnet-2424m" [aa18d5dd-f6eb-4f04-a61e-b0b257e214af] Running
	I0701 12:27:05.729666  653531 system_pods.go:89] "kindnet-6gx8s" [7f46a773-a075-476c-9e54-89f125b4b57a] Running
	I0701 12:27:05.729671  653531 system_pods.go:89] "kindnet-7f6hm" [a8c302b4-1163-4d4f-bfe3-4fd3b5d23cf0] Running
	I0701 12:27:05.729677  653531 system_pods.go:89] "kindnet-bztzv" [7afa0e45-3d10-40bc-b422-7005a3ca9d3a] Running
	I0701 12:27:05.729684  653531 system_pods.go:89] "kube-apiserver-ha-735960" [ad041aaa-465a-4d8a-a8dc-b7665e1d587d] Running
	I0701 12:27:05.729689  653531 system_pods.go:89] "kube-apiserver-ha-735960-m02" [ba28f48e-1c18-47e3-ab11-a9b5588c5c32] Running
	I0701 12:27:05.729695  653531 system_pods.go:89] "kube-apiserver-ha-735960-m03" [baafa3bf-78ee-4269-9591-b0440927e055] Running
	I0701 12:27:05.729702  653531 system_pods.go:89] "kube-controller-manager-ha-735960" [3f0f0cf5-329d-47bc-b922-7583902e2607] Running
	I0701 12:27:05.729710  653531 system_pods.go:89] "kube-controller-manager-ha-735960-m02" [258fde18-ac5c-4446-842b-9465529b154c] Running
	I0701 12:27:05.729720  653531 system_pods.go:89] "kube-controller-manager-ha-735960-m03" [79acc56b-a9e0-4d4b-bc64-1a3a36ddf051] Running
	I0701 12:27:05.729729  653531 system_pods.go:89] "kube-proxy-25ssf" [11f0dc23-ab9d-4d39-988d-4c44dfde86cd] Running
	I0701 12:27:05.729737  653531 system_pods.go:89] "kube-proxy-776rt" [5666dac7-924e-4429-bd1d-a1a5647cc611] Running
	I0701 12:27:05.729745  653531 system_pods.go:89] "kube-proxy-b6knb" [eb36e930-5799-4ff7-821a-ccb22303cd1b] Running
	I0701 12:27:05.729755  653531 system_pods.go:89] "kube-proxy-lphzn" [0761a7a6-740e-4cde-9ab5-e02e8d417907] Running
	I0701 12:27:05.729764  653531 system_pods.go:89] "kube-scheduler-ha-735960" [c624cf42-a7d6-4aaf-859d-1aeaf29f9acb] Running
	I0701 12:27:05.729770  653531 system_pods.go:89] "kube-scheduler-ha-735960-m02" [7de78af7-2d79-46dc-bd34-f221d79fde06] Running
	I0701 12:27:05.729776  653531 system_pods.go:89] "kube-scheduler-ha-735960-m03" [9f9a2030-9332-44af-b8dc-3b4609e53f91] Running
	I0701 12:27:05.729783  653531 system_pods.go:89] "kube-vip-ha-735960" [4299679a-c145-4f4f-8ec6-3cd468b98ef1] Running
	I0701 12:27:05.729789  653531 system_pods.go:89] "kube-vip-ha-735960-m02" [1c9b13e1-515c-43c0-8d99-5ad1c1807727] Running
	I0701 12:27:05.729796  653531 system_pods.go:89] "kube-vip-ha-735960-m03" [7069ea7c-5461-4fe6-a969-97fe33396ebb] Running
	I0701 12:27:05.729802  653531 system_pods.go:89] "storage-provisioner" [f5c4f7f9-d648-4019-a5ea-6ce59f6c5663] Running
	I0701 12:27:05.729815  653531 system_pods.go:126] duration metric: took 210.095212ms to wait for k8s-apps to be running ...
	I0701 12:27:05.729829  653531 system_svc.go:44] waiting for kubelet service to be running ....
	I0701 12:27:05.729888  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:27:05.745646  653531 system_svc.go:56] duration metric: took 15.808828ms WaitForService to wait for kubelet
	I0701 12:27:05.745679  653531 kubeadm.go:576] duration metric: took 23.279640822s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:27:05.745702  653531 node_conditions.go:102] verifying NodePressure condition ...
	I0701 12:27:05.915161  653531 request.go:629] Waited for 169.354932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes
	I0701 12:27:05.915221  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes
	I0701 12:27:05.915226  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.915234  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.915239  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.919105  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:05.920307  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:05.920336  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:05.920352  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:05.920357  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:05.920361  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:05.920366  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:05.920370  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:05.920375  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:05.920382  653531 node_conditions.go:105] duration metric: took 174.672945ms to run NodePressure ...
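
[editor's note] The NodePressure pass lists all four nodes and records each one's capacity — hence the four repeated 17734596Ki / 2-CPU pairs above — while verifying that neither MemoryPressure nor DiskPressure is set. A compact equivalent, reusing the clientset and imports from the first sketch (illustrative only):

    nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, n := range nodes.Items {
        fmt.Println(n.Name,
            "cpu:", n.Status.Capacity.Cpu().String(),
            "ephemeral-storage:", n.Status.Capacity.StorageEphemeral().String())
        for _, c := range n.Status.Conditions {
            // Either pressure condition being True would fail the check.
            if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
                c.Status == corev1.ConditionTrue {
                fmt.Println("  pressure:", c.Type)
            }
        }
    }
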
	I0701 12:27:05.920400  653531 start.go:240] waiting for startup goroutines ...
	I0701 12:27:05.920438  653531 start.go:254] writing updated cluster config ...
	I0701 12:27:05.922556  653531 out.go:177] 
	I0701 12:27:05.924320  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:27:05.924444  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:27:05.926228  653531 out.go:177] * Starting "ha-735960-m04" worker node in "ha-735960" cluster
	I0701 12:27:05.927583  653531 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:27:05.927623  653531 cache.go:56] Caching tarball of preloaded images
	I0701 12:27:05.927740  653531 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:27:05.927753  653531 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:27:05.927868  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:27:05.928081  653531 start.go:360] acquireMachinesLock for ha-735960-m04: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:27:05.928138  653531 start.go:364] duration metric: took 34.293µs to acquireMachinesLock for "ha-735960-m04"
	I0701 12:27:05.928160  653531 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:27:05.928170  653531 fix.go:54] fixHost starting: m04
	I0701 12:27:05.928452  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:27:05.928496  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:27:05.944734  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39337
	I0701 12:27:05.945306  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:27:05.945856  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:27:05.945878  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:27:05.946270  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:27:05.946505  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:05.946718  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetState
	I0701 12:27:05.948900  653531 fix.go:112] recreateIfNeeded on ha-735960-m04: state=Stopped err=<nil>
	I0701 12:27:05.948936  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	W0701 12:27:05.949137  653531 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:27:05.951007  653531 out.go:177] * Restarting existing kvm2 VM for "ha-735960-m04" ...
	I0701 12:27:05.952219  653531 main.go:141] libmachine: (ha-735960-m04) Calling .Start
	I0701 12:27:05.952428  653531 main.go:141] libmachine: (ha-735960-m04) Ensuring networks are active...
	I0701 12:27:05.953378  653531 main.go:141] libmachine: (ha-735960-m04) Ensuring network default is active
	I0701 12:27:05.953815  653531 main.go:141] libmachine: (ha-735960-m04) Ensuring network mk-ha-735960 is active
	I0701 12:27:05.954229  653531 main.go:141] libmachine: (ha-735960-m04) Getting domain xml...
	I0701 12:27:05.954857  653531 main.go:141] libmachine: (ha-735960-m04) Creating domain...
	I0701 12:27:07.274791  653531 main.go:141] libmachine: (ha-735960-m04) Waiting to get IP...
	I0701 12:27:07.275684  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:07.276224  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:07.276269  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:07.276176  654403 retry.go:31] will retry after 236.931472ms: waiting for machine to come up
	I0701 12:27:07.514910  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:07.515487  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:07.515520  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:07.515422  654403 retry.go:31] will retry after 376.766943ms: waiting for machine to come up
	I0701 12:27:07.894235  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:07.894716  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:07.894748  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:07.894658  654403 retry.go:31] will retry after 389.939732ms: waiting for machine to come up
	I0701 12:27:08.286528  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:08.287041  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:08.287066  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:08.286982  654403 retry.go:31] will retry after 542.184171ms: waiting for machine to come up
	I0701 12:27:08.831459  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:08.832024  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:08.832105  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:08.832069  654403 retry.go:31] will retry after 609.488369ms: waiting for machine to come up
	I0701 12:27:09.442798  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:09.443236  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:09.443272  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:09.443174  654403 retry.go:31] will retry after 777.604605ms: waiting for machine to come up
	I0701 12:27:10.221860  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:10.222317  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:10.222352  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:10.222242  654403 retry.go:31] will retry after 1.013463977s: waiting for machine to come up
	I0701 12:27:11.237171  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:11.237628  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:11.237658  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:11.237572  654403 retry.go:31] will retry after 1.368493369s: waiting for machine to come up
	I0701 12:27:12.607736  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:12.608308  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:12.608342  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:12.608254  654403 retry.go:31] will retry after 1.709127759s: waiting for machine to come up
	I0701 12:27:14.320033  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:14.320531  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:14.320565  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:14.320491  654403 retry.go:31] will retry after 2.145058749s: waiting for machine to come up
	I0701 12:27:16.466840  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:16.467246  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:16.467275  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:16.467196  654403 retry.go:31] will retry after 2.340416682s: waiting for machine to come up
	I0701 12:27:18.809756  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:18.810215  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:18.810245  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:18.810155  654403 retry.go:31] will retry after 2.893605535s: waiting for machine to come up
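
The "will retry after" intervals above (236ms, 376ms, 389ms, ... 2.89s) grow roughly geometrically with random jitter while libmachine waits for the VM's DHCP lease to appear. A small sketch of that backoff pattern; the growth factor and jitter range are assumptions for illustration, not minikube's retry.go:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff retries fn with a wait that grows geometrically
    // plus random jitter, matching the shape of the log lines above.
    func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
        wait := initial
        for i := 0; i < attempts; i++ {
            if err := fn(); err == nil {
                return nil
            }
            jittered := wait + time.Duration(rand.Int63n(int64(wait)/2+1))
            fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
            time.Sleep(jittered)
            wait = wait * 3 / 2 // assumed growth factor
        }
        return errors.New("machine never reported an IP")
    }

    func main() {
        calls := 0
        _ = retryWithBackoff(5, 200*time.Millisecond, func() error {
            calls++
            if calls < 4 {
                return errors.New("no IP yet")
            }
            return nil
        })
    }
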
	I0701 12:27:21.705535  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.706011  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has current primary IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.706036  653531 main.go:141] libmachine: (ha-735960-m04) Found IP for machine: 192.168.39.60
	I0701 12:27:21.706050  653531 main.go:141] libmachine: (ha-735960-m04) Reserving static IP address...
	I0701 12:27:21.706638  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "ha-735960-m04", mac: "52:54:00:2d:8e:6d", ip: "192.168.39.60"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:21.706671  653531 main.go:141] libmachine: (ha-735960-m04) Reserved static IP address: 192.168.39.60
	I0701 12:27:21.706689  653531 main.go:141] libmachine: (ha-735960-m04) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960-m04", mac: "52:54:00:2d:8e:6d", ip: "192.168.39.60"}
	I0701 12:27:21.706703  653531 main.go:141] libmachine: (ha-735960-m04) DBG | Getting to WaitForSSH function...
	I0701 12:27:21.706715  653531 main.go:141] libmachine: (ha-735960-m04) Waiting for SSH to be available...
	I0701 12:27:21.709236  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.709702  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:21.709729  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.709818  653531 main.go:141] libmachine: (ha-735960-m04) DBG | Using SSH client type: external
	I0701 12:27:21.709841  653531 main.go:141] libmachine: (ha-735960-m04) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa (-rw-------)
	I0701 12:27:21.709870  653531 main.go:141] libmachine: (ha-735960-m04) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:27:21.709885  653531 main.go:141] libmachine: (ha-735960-m04) DBG | About to run SSH command:
	I0701 12:27:21.709897  653531 main.go:141] libmachine: (ha-735960-m04) DBG | exit 0
	I0701 12:27:21.838462  653531 main.go:141] libmachine: (ha-735960-m04) DBG | SSH cmd err, output: <nil>: 
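
WaitForSSH above probes the guest by running "exit 0" through an external ssh client until the command returns status 0, which proves both that sshd is reachable and that key auth works (the empty "SSH cmd err, output" line is the success). A minimal Go sketch of the same probe loop; host, key path, and poll cadence are placeholders, and the ssh options are a subset of the ones shown in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForSSH polls by running "exit 0" over ssh until it succeeds.
    func waitForSSH(host, keyPath string, deadline time.Duration) error {
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            cmd := exec.Command("ssh",
                "-o", "StrictHostKeyChecking=no",
                "-o", "UserKnownHostsFile=/dev/null",
                "-o", "ConnectTimeout=10",
                "-i", keyPath,
                "docker@"+host, "exit 0")
            if err := cmd.Run(); err == nil {
                return nil // exit status 0: sshd is up and auth works
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("ssh to %s not available after %s", host, deadline)
    }

    func main() {
        // Placeholder key path; the log uses the machine's id_rsa.
        fmt.Println(waitForSSH("192.168.39.60", "/path/to/id_rsa", 30*time.Second))
    }
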
	I0701 12:27:21.838803  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetConfigRaw
	I0701 12:27:21.839497  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetIP
	I0701 12:27:21.842255  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.842727  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:21.842764  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.843067  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:27:21.843309  653531 machine.go:94] provisionDockerMachine start ...
	I0701 12:27:21.843332  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:21.843625  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:21.846158  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.846625  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:21.846658  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.846874  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:21.847122  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:21.847313  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:21.847496  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:21.847763  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:21.847995  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:21.848012  653531 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:27:21.958527  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:27:21.958560  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetMachineName
	I0701 12:27:21.958896  653531 buildroot.go:166] provisioning hostname "ha-735960-m04"
	I0701 12:27:21.958928  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetMachineName
	I0701 12:27:21.959168  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:21.961718  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.962176  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:21.962212  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.962410  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:21.962629  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:21.962804  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:21.962930  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:21.963089  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:21.963293  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:21.963311  653531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960-m04 && echo "ha-735960-m04" | sudo tee /etc/hostname
	I0701 12:27:22.089150  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960-m04
	
	I0701 12:27:22.089185  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.092352  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.092805  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.092829  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.093059  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:22.093293  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.093532  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.093680  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:22.093947  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:22.094124  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:22.094152  653531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:27:22.211873  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
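
The shell block above makes the hostname mapping idempotent: if /etc/hosts already maps ha-735960-m04 it does nothing, otherwise it rewrites the 127.0.1.1 entry in place or appends one. The same decision logic expressed in Go, purely for illustration (minikube runs the shell script shown, not this):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureHostname returns hosts with a line mapping name, reusing an
    // existing 127.0.1.1 entry when present, appending otherwise.
    func ensureHostname(hosts, name string) string {
        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
            return hosts // hostname already mapped, nothing to do
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(hosts) {
            return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
        fmt.Print(ensureHostname("127.0.0.1 localhost\n", "ha-735960-m04"))
    }
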
	I0701 12:27:22.211908  653531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:27:22.211930  653531 buildroot.go:174] setting up certificates
	I0701 12:27:22.211938  653531 provision.go:84] configureAuth start
	I0701 12:27:22.211947  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetMachineName
	I0701 12:27:22.212269  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetIP
	I0701 12:27:22.215120  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.215523  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.215555  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.215810  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.218161  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.218800  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.218836  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.219044  653531 provision.go:143] copyHostCerts
	I0701 12:27:22.219086  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:27:22.219130  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:27:22.219141  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:27:22.219226  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:27:22.219330  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:27:22.219356  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:27:22.219365  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:27:22.219402  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:27:22.219472  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:27:22.219497  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:27:22.219503  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:27:22.219534  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:27:22.219602  653531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960-m04 san=[127.0.0.1 192.168.39.60 ha-735960-m04 localhost minikube]
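
The san=[...] list above shows the server certificate being generated for every name the node can be answered on: loopback, the node IP, the node hostname, localhost, and minikube. A self-signed sketch of that SAN handling with crypto/x509; minikube signs against its CA instead, and only the names mirror the log:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // newServerCert builds a cert valid for the given IP and DNS names,
    // self-signed here for brevity.
    func newServerCert(ip string, names []string) ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-735960-m04"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP(ip)},
            DNSNames:     names,
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        return x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    }

    func main() {
        der, err := newServerCert("192.168.39.60", []string{"ha-735960-m04", "localhost", "minikube"})
        fmt.Println(len(der), err)
    }
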
	I0701 12:27:22.329827  653531 provision.go:177] copyRemoteCerts
	I0701 12:27:22.329892  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:27:22.329923  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.332967  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.333373  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.333406  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.333651  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:22.333896  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.334062  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:22.334281  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa Username:docker}
	I0701 12:27:22.417286  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:27:22.417383  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:27:22.441229  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:27:22.441316  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0701 12:27:22.465192  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:27:22.465262  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 12:27:22.489482  653531 provision.go:87] duration metric: took 277.524425ms to configureAuth
	I0701 12:27:22.489525  653531 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:27:22.489832  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:27:22.489882  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:22.490191  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.493387  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.493808  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.493842  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.494001  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:22.494272  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.494482  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.494666  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:22.494871  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:22.495082  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:22.495096  653531 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:27:22.603693  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:27:22.603722  653531 buildroot.go:70] root file system type: tmpfs
	I0701 12:27:22.603868  653531 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:27:22.603921  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.606932  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.607406  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.607441  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.607659  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:22.607881  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.608030  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.608161  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:22.608332  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:22.608539  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:22.608607  653531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.16"
	Environment="NO_PROXY=192.168.39.16,192.168.39.86"
	Environment="NO_PROXY=192.168.39.16,192.168.39.86,192.168.39.97"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:27:22.729176  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.16
	Environment=NO_PROXY=192.168.39.16,192.168.39.86
	Environment=NO_PROXY=192.168.39.16,192.168.39.86,192.168.39.97
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:27:22.729234  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.732936  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.733425  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.733462  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.733653  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:22.733908  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.734181  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.734376  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:22.734607  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:22.734842  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:22.734871  653531 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:27:24.534039  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 12:27:24.534075  653531 machine.go:97] duration metric: took 2.690748128s to provisionDockerMachine
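
The "diff -u ... || { mv ...; systemctl ... restart docker; }" command above is an idempotent install: the candidate unit is written as docker.service.new, and the move plus daemon-reload plus enable plus restart only run when diff reports a difference (here the old file did not exist yet, hence the "can't stat" message and the created symlink). A write-if-changed sketch of the same idea, with illustrative paths:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // installIfChanged compares the installed file with the candidate
    // content and only swaps the new version in when they differ, so the
    // caller knows whether a service restart is needed.
    func installIfChanged(path string, content []byte) (changed bool, err error) {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, content) {
            return false, nil // identical: skip daemon-reload/restart
        }
        if err := os.WriteFile(path+".new", content, 0o644); err != nil {
            return false, err
        }
        return true, os.Rename(path+".new", path)
    }

    func main() {
        changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
        fmt.Println(changed, err)
    }
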
	I0701 12:27:24.534091  653531 start.go:293] postStartSetup for "ha-735960-m04" (driver="kvm2")
	I0701 12:27:24.534104  653531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:27:24.534123  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.534499  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:27:24.534541  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:24.537254  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.537740  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.537779  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.537959  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:24.538181  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.538373  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:24.538597  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa Username:docker}
	I0701 12:27:24.622239  653531 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:27:24.626566  653531 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:27:24.626597  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:27:24.626682  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:27:24.626776  653531 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:27:24.626790  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:27:24.626899  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:27:24.638615  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:27:24.662568  653531 start.go:296] duration metric: took 128.459164ms for postStartSetup
	I0701 12:27:24.662618  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.663010  653531 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:27:24.663051  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:24.665748  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.666087  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.666114  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.666265  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:24.666549  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.666727  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:24.666943  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa Username:docker}
	I0701 12:27:24.753987  653531 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:27:24.754081  653531 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:27:24.791910  653531 fix.go:56] duration metric: took 18.863722464s for fixHost
	I0701 12:27:24.791970  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:24.795473  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.795824  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.795860  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.796063  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:24.796321  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.796518  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.796690  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:24.796892  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:24.797130  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:24.797146  653531 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0701 12:27:24.911069  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836844.884316737
	
	I0701 12:27:24.911100  653531 fix.go:216] guest clock: 1719836844.884316737
	I0701 12:27:24.911110  653531 fix.go:229] Guest: 2024-07-01 12:27:24.884316737 +0000 UTC Remote: 2024-07-01 12:27:24.791945819 +0000 UTC m=+202.261797488 (delta=92.370918ms)
	I0701 12:27:24.911131  653531 fix.go:200] guest clock delta is within tolerance: 92.370918ms
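
fix.go above reads the guest clock over SSH (date +%s.%N), computes the delta against the host's view of the same moment (92.37ms here), and accepts the skew when it falls inside a tolerance. A sketch of that comparison; the 2s tolerance is an assumption for illustration:

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaOK returns the absolute guest/host skew and whether it
    // is within the given tolerance.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        host := time.Now()
        guest := host.Add(92 * time.Millisecond) // delta seen in the log
        d, ok := clockDeltaOK(guest, host, 2*time.Second)
        fmt.Printf("delta=%s within tolerance=%v\n", d, ok)
    }
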
	I0701 12:27:24.911137  653531 start.go:83] releasing machines lock for "ha-735960-m04", held for 18.982986548s
	I0701 12:27:24.911163  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.911481  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetIP
	I0701 12:27:24.914298  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.914691  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.914721  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.917119  653531 out.go:177] * Found network options:
	I0701 12:27:24.918569  653531 out.go:177]   - NO_PROXY=192.168.39.16,192.168.39.86,192.168.39.97
	W0701 12:27:24.919961  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:27:24.919987  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:27:24.919997  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:27:24.920012  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.920847  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.921063  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.921170  653531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:27:24.921210  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	W0701 12:27:24.921252  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:27:24.921277  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:27:24.921290  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:27:24.921364  653531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0701 12:27:24.921385  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:24.924253  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.924561  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.924715  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.924742  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.924933  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:24.925058  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.925080  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.925110  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.925325  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:24.925339  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:24.925519  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.925615  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa Username:docker}
	I0701 12:27:24.925685  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:24.925840  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa Username:docker}
	W0701 12:27:25.004044  653531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:27:25.004109  653531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:27:25.029712  653531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:27:25.029746  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:27:25.029880  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:27:25.052034  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:27:25.062847  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:27:25.073005  653531 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:27:25.073080  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:27:25.083300  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:27:25.093834  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:27:25.104814  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:27:25.115006  653531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:27:25.126080  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:27:25.136492  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:27:25.147986  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:27:25.158638  653531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:27:25.168301  653531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:27:25.177427  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:25.290645  653531 ssh_runner.go:195] Run: sudo systemctl restart containerd
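
The sed pipeline above rewrites /etc/containerd/config.toml so containerd matches the detected "cgroupfs" driver, the key edit being SystemdCgroup = false, before the daemon-reload and restart. That substitution expressed in Go, operating on a string rather than the real file:

    package main

    import (
        "fmt"
        "regexp"
    )

    // setCgroupfs applies the same edit as the sed command above: force
    // SystemdCgroup = false while preserving the line's indentation.
    func setCgroupfs(config string) string {
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        return re.ReplaceAllString(config, "${1}SystemdCgroup = false")
    }

    func main() {
        fmt.Print(setCgroupfs("    SystemdCgroup = true\n"))
    }
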
	I0701 12:27:25.317946  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:27:25.318090  653531 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:27:25.333522  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:27:25.349308  653531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:27:25.366057  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:27:25.379554  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:27:25.393005  653531 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:27:25.427883  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:27:25.443710  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:27:25.462653  653531 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:27:25.466440  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:27:25.475817  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:27:25.491900  653531 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:27:25.609810  653531 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:27:25.736607  653531 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:27:25.736666  653531 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 12:27:25.753218  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:25.872913  653531 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:27:28.274644  653531 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.401692528s)
	I0701 12:27:28.274730  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 12:27:28.288270  653531 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0701 12:27:28.306360  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:27:28.320063  653531 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:27:28.444909  653531 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:27:28.582500  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:28.708064  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:27:28.728173  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:27:28.743660  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:28.873765  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0701 12:27:28.960958  653531 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:27:28.961063  653531 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0701 12:27:28.967089  653531 start.go:562] Will wait 60s for crictl version
	I0701 12:27:28.967205  653531 ssh_runner.go:195] Run: which crictl
	I0701 12:27:28.971404  653531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:27:29.011615  653531 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.1
	RuntimeApiVersion:  v1
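
After restarting cri-docker.service, minikube waits up to 60s for the /var/run/cri-dockerd.sock socket and then for a working crictl, which here reports Docker 27.0.1 behind CRI API v1. A sketch of the stat-based socket wait; the poll interval is an illustrative choice:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls with stat until the CRI socket path exists,
    // the same "Will wait 60s for socket path" behavior logged above.
    func waitForSocket(path string, deadline time.Duration) error {
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("socket %s not ready after %s", path, deadline)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
    }
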
	I0701 12:27:29.011699  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:27:29.041339  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:27:29.073461  653531 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
	I0701 12:27:29.075110  653531 out.go:177]   - env NO_PROXY=192.168.39.16
	I0701 12:27:29.076621  653531 out.go:177]   - env NO_PROXY=192.168.39.16,192.168.39.86
	I0701 12:27:29.078186  653531 out.go:177]   - env NO_PROXY=192.168.39.16,192.168.39.86,192.168.39.97
	I0701 12:27:29.079949  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetIP
	I0701 12:27:29.083268  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:29.083683  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:29.083711  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:29.084018  653531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0701 12:27:29.088562  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:27:29.105010  653531 mustload.go:65] Loading cluster: ha-735960
	I0701 12:27:29.105303  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:27:29.105654  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:27:29.105708  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:27:29.121628  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I0701 12:27:29.122222  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:27:29.122816  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:27:29.122844  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:27:29.123210  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:27:29.123475  653531 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:27:29.125364  653531 host.go:66] Checking if "ha-735960" exists ...
	I0701 12:27:29.125670  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:27:29.125708  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:27:29.141532  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36053
	I0701 12:27:29.142051  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:27:29.142638  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:27:29.142662  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:27:29.143010  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:27:29.143254  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:27:29.143488  653531 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960 for IP: 192.168.39.60
	I0701 12:27:29.143501  653531 certs.go:194] generating shared ca certs ...
	I0701 12:27:29.143518  653531 certs.go:226] acquiring lock for ca certs: {Name:mk34e166bfd069e523b2325e14d1812c523bff53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:27:29.143646  653531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key
	I0701 12:27:29.143686  653531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key
	I0701 12:27:29.143702  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 12:27:29.143722  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 12:27:29.143739  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 12:27:29.143757  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 12:27:29.143817  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem (1338 bytes)
	W0701 12:27:29.143851  653531 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854_empty.pem, impossibly tiny 0 bytes
	I0701 12:27:29.143871  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 12:27:29.143894  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:27:29.143916  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:27:29.143937  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem (1675 bytes)
	I0701 12:27:29.143972  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:27:29.144004  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:27:29.144021  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem -> /usr/share/ca-certificates/637854.pem
	I0701 12:27:29.144041  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /usr/share/ca-certificates/6378542.pem
	I0701 12:27:29.144072  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:27:29.171419  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:27:29.196509  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:27:29.222599  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:27:29.248989  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:27:29.275034  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem --> /usr/share/ca-certificates/637854.pem (1338 bytes)
	I0701 12:27:29.300102  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /usr/share/ca-certificates/6378542.pem (1708 bytes)
	I0701 12:27:29.327329  653531 ssh_runner.go:195] Run: openssl version
	I0701 12:27:29.333121  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:27:29.344555  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:27:29.349319  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:27:29.349394  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:27:29.355247  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 12:27:29.366285  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/637854.pem && ln -fs /usr/share/ca-certificates/637854.pem /etc/ssl/certs/637854.pem"
	I0701 12:27:29.376931  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/637854.pem
	I0701 12:27:29.381303  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 12:11 /usr/share/ca-certificates/637854.pem
	I0701 12:27:29.381385  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/637854.pem
	I0701 12:27:29.387458  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/637854.pem /etc/ssl/certs/51391683.0"
	I0701 12:27:29.398343  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6378542.pem && ln -fs /usr/share/ca-certificates/6378542.pem /etc/ssl/certs/6378542.pem"
	I0701 12:27:29.409321  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6378542.pem
	I0701 12:27:29.414299  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 12:11 /usr/share/ca-certificates/6378542.pem
	I0701 12:27:29.414400  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem
	I0701 12:27:29.420975  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6378542.pem /etc/ssl/certs/3ec20f2e.0"
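The openssl x509 -hash / ln -fs pairs above implement the standard OpenSSL trust-directory layout: each CA under /etc/ssl/certs is reachable through a <subject-hash>.0 symlink that TLS clients scan for. A sketch of the same wiring, assuming openssl is on PATH and using illustrative local paths:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert computes the OpenSSL subject hash for a PEM certificate and
	// symlinks it as <hash>.0 in certDir, the lookup layout used above.
	func linkCert(pemPath, certDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as in the log
		link := filepath.Join(certDir, hash+".0")
		_ = os.Remove(link) // mirror ln -fs: replace any stale link
		return os.Symlink(pemPath, link)
	}

	func main() {
		// placeholder paths; the real run targets /usr/share/ca-certificates
		if err := linkCert("./minikubeCA.pem", "."); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}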
	I0701 12:27:29.434286  653531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 12:27:29.438767  653531 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0701 12:27:29.438817  653531 kubeadm.go:928] updating node {m04 192.168.39.60 0 v1.30.2 docker false true} ...
	I0701 12:27:29.438918  653531 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0701 12:27:29.438988  653531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 12:27:29.450811  653531 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:27:29.450895  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0701 12:27:29.462511  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0701 12:27:29.480246  653531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:27:29.497624  653531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0701 12:27:29.502554  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:27:29.515005  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:29.648948  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:27:29.668809  653531 start.go:234] Will wait 6m0s for node &{Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0701 12:27:29.669186  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:27:29.671772  653531 out.go:177] * Verifying Kubernetes components...
	I0701 12:27:29.673288  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:29.823420  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:27:29.839349  653531 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:27:29.839675  653531 kapi.go:59] client config for ha-735960: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key", CAFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfbb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0701 12:27:29.839746  653531 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.16:8443
	I0701 12:27:29.840001  653531 node_ready.go:35] waiting up to 6m0s for node "ha-735960-m04" to be "Ready" ...
	I0701 12:27:29.840108  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:27:29.840118  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:29.840130  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:29.840138  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:29.843740  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.340654  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:27:30.340679  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.340687  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.340691  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.344079  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.344547  653531 node_ready.go:49] node "ha-735960-m04" has status "Ready":"True"
	I0701 12:27:30.344570  653531 node_ready.go:38] duration metric: took 504.547887ms for node "ha-735960-m04" to be "Ready" ...
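The node_ready wait above is a plain GET of the Node object, repeated until its Ready condition reports True. A hedged client-go equivalent of that check; the kubeconfig path is a placeholder:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's Ready condition is True,
	// the same test node_ready.go applies on each poll.
	func nodeReady(c kubernetes.Interface, name string) (bool, error) {
		n, err := c.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range n.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			ready, err := nodeReady(client, "ha-735960-m04")
			if err == nil && ready {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // the log polls on a similar cadence
		}
	}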
	I0701 12:27:30.344579  653531 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 12:27:30.344650  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:27:30.344660  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.344668  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.344675  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.351108  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:27:30.358660  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.358749  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:27:30.358758  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.358766  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.358771  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.362032  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.362784  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:30.362802  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.362812  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.362816  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.365450  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.365914  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:30.365936  653531 pod_ready.go:81] duration metric: took 7.248792ms for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.365949  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.366016  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p4rtz
	I0701 12:27:30.366025  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.366035  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.366043  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.368928  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.369820  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:30.369836  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.369843  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.369858  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.373004  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.373769  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:30.373785  653531 pod_ready.go:81] duration metric: took 7.830149ms for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.373794  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.373848  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960
	I0701 12:27:30.373856  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.373862  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.373867  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.376565  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.377340  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:30.377356  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.377363  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.377367  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.379523  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.379966  653531 pod_ready.go:92] pod "etcd-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:30.379982  653531 pod_ready.go:81] duration metric: took 6.178731ms for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.379991  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.380048  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m02
	I0701 12:27:30.380055  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.380062  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.380069  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.382485  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.383125  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:30.383141  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.383148  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.383155  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.385845  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.386599  653531 pod_ready.go:92] pod "etcd-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:30.386616  653531 pod_ready.go:81] duration metric: took 6.619715ms for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.386624  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.541077  653531 request.go:629] Waited for 154.380092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:27:30.541196  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:27:30.541207  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.541219  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.541229  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.544660  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.740754  653531 request.go:629] Waited for 195.337132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:30.740847  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:30.740857  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.740865  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.740869  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.744492  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.745072  653531 pod_ready.go:92] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:30.745094  653531 pod_ready.go:81] duration metric: took 358.462325ms for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
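The recurring request.go:629 "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's default client-side rate limiter (QPS 5, Burst 10 when rest.Config leaves them at 0), which is what spaces these GETs roughly 200ms apart. A sketch of loosening it for bursty test polling; the values and kubeconfig path are illustrative:

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			panic(err)
		}
		// With QPS/Burst unset, client-go falls back to 5 QPS / burst 10,
		// producing the throttling waits seen above. Raising them trades
		// API-server load for faster poll loops.
		cfg.QPS = 50
		cfg.Burst = 100
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Printf("client ready: %T\n", client)
	}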
	I0701 12:27:30.745123  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.941364  653531 request.go:629] Waited for 196.100673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960
	I0701 12:27:30.941453  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960
	I0701 12:27:30.941466  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.941477  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.941487  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.946577  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:27:31.140711  653531 request.go:629] Waited for 193.223112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:31.140788  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:31.140793  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:31.140800  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:31.140804  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:31.146571  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:27:31.147245  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:31.147269  653531 pod_ready.go:81] duration metric: took 402.135058ms for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:31.147280  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:31.341367  653531 request.go:629] Waited for 193.988845ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m02
	I0701 12:27:31.341477  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m02
	I0701 12:27:31.341489  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:31.341500  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:31.341508  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:31.345561  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:31.540709  653531 request.go:629] Waited for 194.115472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:31.540784  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:31.540789  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:31.540797  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:31.540800  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:31.544920  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:31.545652  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:31.545679  653531 pod_ready.go:81] duration metric: took 398.391166ms for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:31.545689  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:31.741170  653531 request.go:629] Waited for 195.369232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:31.741243  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:31.741251  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:31.741261  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:31.741272  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:31.745382  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:31.941422  653531 request.go:629] Waited for 195.397431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:31.941512  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:31.941517  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:31.941526  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:31.941531  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:31.945358  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:31.945947  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:31.945971  653531 pod_ready.go:81] duration metric: took 400.276204ms for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:31.945982  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:32.140926  653531 request.go:629] Waited for 194.860847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960
	I0701 12:27:32.141014  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960
	I0701 12:27:32.141023  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:32.141048  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:32.141058  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:32.146741  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:27:32.341040  653531 request.go:629] Waited for 193.334578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:32.341112  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:32.341117  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:32.341126  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:32.341132  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:32.344664  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:32.345182  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:32.345200  653531 pod_ready.go:81] duration metric: took 399.209545ms for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:32.345210  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:32.541314  653531 request.go:629] Waited for 196.016373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m02
	I0701 12:27:32.541395  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m02
	I0701 12:27:32.541402  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:32.541414  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:32.541424  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:32.545663  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:32.741118  653531 request.go:629] Waited for 194.597088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:32.741201  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:32.741209  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:32.741220  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:32.741228  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:32.745051  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:32.745612  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:32.745636  653531 pod_ready.go:81] duration metric: took 400.417224ms for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:32.745651  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:32.941594  653531 request.go:629] Waited for 195.859048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m03
	I0701 12:27:32.941697  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m03
	I0701 12:27:32.941704  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:32.941712  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:32.941720  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:32.945661  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:33.140796  653531 request.go:629] Waited for 194.297237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:33.140872  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:33.140881  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:33.140892  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:33.140902  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:33.148523  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:27:33.149119  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:33.149229  653531 pod_ready.go:81] duration metric: took 403.561455ms for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:33.149274  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:33.341103  653531 request.go:629] Waited for 191.712414ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25ssf
	I0701 12:27:33.341203  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25ssf
	I0701 12:27:33.341211  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:33.341222  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:33.341236  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:33.345005  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:33.541118  653531 request.go:629] Waited for 195.201433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:27:33.541195  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:27:33.541202  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:33.541212  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:33.541220  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:33.544937  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:33.546208  653531 pod_ready.go:92] pod "kube-proxy-25ssf" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:33.546231  653531 pod_ready.go:81] duration metric: took 396.932438ms for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:33.546244  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:33.741353  653531 request.go:629] Waited for 195.026851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:27:33.741456  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:27:33.741466  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:33.741475  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:33.741481  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:33.745239  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:33.941300  653531 request.go:629] Waited for 195.397929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:33.941381  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:33.941388  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:33.941399  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:33.941408  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:33.944917  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:33.945530  653531 pod_ready.go:92] pod "kube-proxy-776rt" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:33.945551  653531 pod_ready.go:81] duration metric: took 399.299813ms for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:33.945565  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:34.140984  653531 request.go:629] Waited for 195.324742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:27:34.141050  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:27:34.141055  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:34.141063  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:34.141075  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:34.144882  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:34.341131  653531 request.go:629] Waited for 195.426765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:34.341198  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:34.341203  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:34.341211  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:34.341215  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:34.344938  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:34.345533  653531 pod_ready.go:92] pod "kube-proxy-b6knb" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:34.345554  653531 pod_ready.go:81] duration metric: took 399.982623ms for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:34.345563  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:34.540691  653531 request.go:629] Waited for 195.046851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:27:34.540777  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:27:34.540782  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:34.540794  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:34.540798  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:34.544410  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:34.741782  653531 request.go:629] Waited for 196.474041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:34.741851  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:34.741856  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:34.741864  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:34.741869  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:34.745447  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:34.746289  653531 pod_ready.go:92] pod "kube-proxy-lphzn" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:34.746312  653531 pod_ready.go:81] duration metric: took 400.742893ms for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:34.746344  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:34.941411  653531 request.go:629] Waited for 194.97877ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:27:34.941489  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:27:34.941495  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:34.941502  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:34.941510  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:34.944984  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:35.141079  653531 request.go:629] Waited for 195.409668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:35.141163  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:35.141168  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:35.141176  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:35.141194  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:35.144737  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:35.145431  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:35.145471  653531 pod_ready.go:81] duration metric: took 399.115782ms for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:35.145485  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:35.341554  653531 request.go:629] Waited for 195.979537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:27:35.341639  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:27:35.341650  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:35.341661  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:35.341672  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:35.345199  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:35.541252  653531 request.go:629] Waited for 195.403848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:35.541340  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:35.541346  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:35.541354  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:35.541362  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:35.545398  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:35.546010  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:35.546037  653531 pod_ready.go:81] duration metric: took 400.543297ms for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:35.546051  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:35.741442  653531 request.go:629] Waited for 195.294004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:27:35.741533  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:27:35.741541  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:35.741553  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:35.741565  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:35.744725  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:35.940687  653531 request.go:629] Waited for 195.284608ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:35.940760  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:35.940766  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:35.940776  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:35.940783  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:35.944482  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:35.945011  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:35.945032  653531 pod_ready.go:81] duration metric: took 398.973476ms for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:35.945048  653531 pod_ready.go:38] duration metric: took 5.600458409s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 12:27:35.945074  653531 system_svc.go:44] waiting for kubelet service to be running ....
	I0701 12:27:35.945143  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:27:35.962762  653531 system_svc.go:56] duration metric: took 17.680549ms WaitForService to wait for kubelet
	I0701 12:27:35.962795  653531 kubeadm.go:576] duration metric: took 6.293928606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:27:35.962817  653531 node_conditions.go:102] verifying NodePressure condition ...
	I0701 12:27:36.141286  653531 request.go:629] Waited for 178.366419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes
	I0701 12:27:36.141375  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes
	I0701 12:27:36.141382  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:36.141394  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:36.141404  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:36.145426  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:36.146951  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:36.146977  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:36.146989  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:36.146992  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:36.146996  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:36.146999  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:36.147001  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:36.147004  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:36.147009  653531 node_conditions.go:105] duration metric: took 184.187151ms to run NodePressure ...
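The NodePressure pass above just reads each node's capacity out of a single Nodes list. An equivalent extraction with client-go, matching the cpu/ephemeral-storage lines in the log; the kubeconfig path is a placeholder:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			// corresponds to "node cpu capacity is 2" / "17734596Ki" above
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}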
	I0701 12:27:36.147024  653531 start.go:240] waiting for startup goroutines ...
	I0701 12:27:36.147054  653531 start.go:254] writing updated cluster config ...
	I0701 12:27:36.147403  653531 ssh_runner.go:195] Run: rm -f paused
	I0701 12:27:36.201170  653531 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0701 12:27:36.203376  653531 out.go:177] * Done! kubectl is now configured to use "ha-735960" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 01 12:25:13 ha-735960 cri-dockerd[1398]: time="2024-07-01T12:25:13Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.366654170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.366710385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.366723641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.367696676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.388479723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.388593936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.389018347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.389381366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.390771396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.391192786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.391291548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.391685449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:24 ha-735960 dockerd[1125]: time="2024-07-01T12:25:24.321168284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:24 ha-735960 dockerd[1125]: time="2024-07-01T12:25:24.321255362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:24 ha-735960 dockerd[1125]: time="2024-07-01T12:25:24.321269990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:24 ha-735960 dockerd[1125]: time="2024-07-01T12:25:24.321347198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:25 ha-735960 dockerd[1125]: time="2024-07-01T12:25:25.309227018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:25 ha-735960 dockerd[1125]: time="2024-07-01T12:25:25.309334545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:25 ha-735960 dockerd[1125]: time="2024-07-01T12:25:25.309346230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:25 ha-735960 dockerd[1125]: time="2024-07-01T12:25:25.309972461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:26 ha-735960 dockerd[1125]: time="2024-07-01T12:25:26.350220788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:26 ha-735960 dockerd[1125]: time="2024-07-01T12:25:26.350306647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:26 ha-735960 dockerd[1125]: time="2024-07-01T12:25:26.350329844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:26 ha-735960 dockerd[1125]: time="2024-07-01T12:25:26.350448560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	51a34f4432461       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       1                   d2dc46de092d5       storage-provisioner
	bf788c37e0912       ac1c61439df46                                                                                         3 minutes ago       Running             kindnet-cni               1                   afbde11b8a740       kindnet-7f6hm
	8cdf2026ed072       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   7d907d7b28c98       busybox-fc5497c4f-pjfcw
	710f5c3a9f856       53c535741fb44                                                                                         3 minutes ago       Running             kube-proxy                1                   e49ff3fb80595       kube-proxy-lphzn
	61dc29970290b       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   de1daec45ac89       coredns-7db6d8ff4d-p4rtz
	4a151786b08f5       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   26981372e6136       coredns-7db6d8ff4d-nk4lf
	8ee3e44a43c3b       56ce0fd9fb532                                                                                         3 minutes ago       Running             kube-apiserver            5                   1b92afc0e4763       kube-apiserver-ha-735960
	67dc946c8c45c       e874818b3caac                                                                                         3 minutes ago       Running             kube-controller-manager   5                   3379ae4b4d689       kube-controller-manager-ha-735960
	1c046b029aa4a       38af8ddebf499                                                                                         4 minutes ago       Running             kube-vip                  1                   32c93b266a82d       kube-vip-ha-735960
	693eb0b8f5d78       7820c83aa1394                                                                                         4 minutes ago       Running             kube-scheduler            2                   ec2e5d106b539       kube-scheduler-ha-735960
	ec2c061093f10       e874818b3caac                                                                                         4 minutes ago       Exited              kube-controller-manager   4                   3379ae4b4d689       kube-controller-manager-ha-735960
	852492f61fee7       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      2                   c9044136ea747       etcd-ha-735960
	a3cb59ee8d572       56ce0fd9fb532                                                                                         4 minutes ago       Exited              kube-apiserver            4                   1b92afc0e4763       kube-apiserver-ha-735960
	cecb3dd12e16e       38af8ddebf499                                                                                         7 minutes ago       Exited              kube-vip                  0                   8d1562fb4b8c3       kube-vip-ha-735960
	6a200a6b49020       3861cfcd7c04c                                                                                         7 minutes ago       Exited              etcd                      1                   5b1097d48d724       etcd-ha-735960
	2d71437c5f06d       7820c83aa1394                                                                                         7 minutes ago       Exited              kube-scheduler            1                   fa7dea6a1b8bd       kube-scheduler-ha-735960
	1ef6d9da6a9c5       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   10 minutes ago      Exited              busybox                   0                   1f5ccc7b0e655       busybox-fc5497c4f-pjfcw
	a9c30cd4b3455       cbb01a7bd410d                                                                                         12 minutes ago      Exited              coredns                   0                   7b4b4f7ec4b63       coredns-7db6d8ff4d-nk4lf
	769b0b8751350       cbb01a7bd410d                                                                                         12 minutes ago      Exited              coredns                   0                   7a349370d4f88       coredns-7db6d8ff4d-p4rtz
	f472aef5302fd       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              12 minutes ago      Exited              kindnet-cni               0                   ab9c74a502295       kindnet-7f6hm
	6116abe6039dc       53c535741fb44                                                                                         12 minutes ago      Exited              kube-proxy                0                   da69191059798       kube-proxy-lphzn
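	
	The snapshot above is CRI container state. On a comparable live cluster it can be re-collected straight from the node (a sketch, assuming the ha-735960 profile is still running and ships crictl, as minikube VMs do):
	
	  $ minikube ssh -p ha-735960 -- sudo crictl ps -a
	
	The ATTEMPT column counts container restarts per pod: the Exited kube-apiserver and kube-controller-manager rows at attempt 4, next to their Running attempt-5 replacements, show the control plane crash-looped before settling.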
	
	
	==> coredns [4a151786b08f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47509 - 49224 "HINFO IN 6979381009676685748.1822735874857968465. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033568754s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[177456986]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.743) (total time: 30001ms):
	Trace[177456986]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:25:53.744)
	Trace[177456986]: [30.001445665s] [30.001445665s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[947462717]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.742) (total time: 30003ms):
	Trace[947462717]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:25:53.743)
	Trace[947462717]: [30.0032009s] [30.0032009s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[886534813]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.742) (total time: 30004ms):
	Trace[886534813]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (12:25:53.745)
	Trace[886534813]: [30.004749172s] [30.004749172s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [61dc29970290] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49574 - 32592 "HINFO IN 7534101530096432962.1842168600618500663. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017366932s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2027452150]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.742) (total time: 30003ms):
	Trace[2027452150]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:25:53.743)
	Trace[2027452150]: [30.003896779s] [30.003896779s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[222503702]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.743) (total time: 30003ms):
	Trace[222503702]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:25:53.744)
	Trace[222503702]: [30.003901467s] [30.003901467s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1950728267]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.742) (total time: 30005ms):
	Trace[1950728267]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (12:25:53.745)
	Trace[1950728267]: [30.005235099s] [30.005235099s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
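	
	Both restarted coredns containers show the same pattern: roughly 30s of reflector list timeouts against https://10.96.0.1:443 (the in-cluster kubernetes Service VIP) before the API became reachable, consistent with the apiserver restarts in the container snapshot above. The live logs can be pulled with (a sketch, assuming the ha-735960 kubeconfig context that minikube creates for the profile):
	
	  $ kubectl --context ha-735960 -n kube-system logs coredns-7db6d8ff4d-p4rtz
	  $ kubectl --context ha-735960 -n kube-system logs coredns-7db6d8ff4d-nk4lf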
	
	
	==> coredns [769b0b875135] <==
	[INFO] 10.244.1.2:44221 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000082797s
	[INFO] 10.244.2.2:33797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157729s
	[INFO] 10.244.2.2:52590 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004055351s
	[INFO] 10.244.2.2:46983 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003253494s
	[INFO] 10.244.2.2:56187 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205215s
	[INFO] 10.244.2.2:41086 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158307s
	[INFO] 10.244.0.4:47783 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097077s
	[INFO] 10.244.0.4:50743 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001523s
	[INFO] 10.244.0.4:37141 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138763s
	[INFO] 10.244.1.2:32981 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132906s
	[INFO] 10.244.1.2:36762 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001646552s
	[INFO] 10.244.1.2:33583 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072434s
	[INFO] 10.244.2.2:37027 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156518s
	[INFO] 10.244.2.2:58435 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104504s
	[INFO] 10.244.2.2:36107 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090251s
	[INFO] 10.244.0.4:44792 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000227164s
	[INFO] 10.244.0.4:56557 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000140925s
	[INFO] 10.244.1.2:38284 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000232717s
	[INFO] 10.244.2.2:37664 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135198s
	[INFO] 10.244.2.2:60876 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00032392s
	[INFO] 10.244.1.2:37461 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133264s
	[INFO] 10.244.1.2:45182 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117372s
	[INFO] 10.244.1.2:37156 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000240093s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a9c30cd4b345] <==
	[INFO] 10.244.0.4:57095 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002251804s
	[INFO] 10.244.0.4:42381 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081215s
	[INFO] 10.244.0.4:53499 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00124929s
	[INFO] 10.244.0.4:41287 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174281s
	[INFO] 10.244.0.4:36433 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142863s
	[INFO] 10.244.1.2:47688 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130034s
	[INFO] 10.244.1.2:40562 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00183587s
	[INFO] 10.244.1.2:35137 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000771s
	[INFO] 10.244.1.2:37798 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184282s
	[INFO] 10.244.1.2:43876 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008807s
	[INFO] 10.244.2.2:35039 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119303s
	[INFO] 10.244.0.4:53229 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090292s
	[INFO] 10.244.0.4:42097 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011308s
	[INFO] 10.244.1.2:42114 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130767s
	[INFO] 10.244.1.2:56638 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110707s
	[INFO] 10.244.1.2:55805 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093484s
	[INFO] 10.244.2.2:51675 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000145117s
	[INFO] 10.244.2.2:56838 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136843s
	[INFO] 10.244.0.4:60951 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162889s
	[INFO] 10.244.0.4:34776 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112367s
	[INFO] 10.244.0.4:45397 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000073771s
	[INFO] 10.244.0.4:52372 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000058127s
	[INFO] 10.244.1.2:41033 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131962s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
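	
	Unlike the restarted instances, the two Exited coredns containers end with an orderly SIGTERM and a 5s lameduck window, i.e. they were shut down with the cluster rather than crashing. Their output is still retrievable as the previous container instance (same assumed context):
	
	  $ kubectl --context ha-735960 -n kube-system logs coredns-7db6d8ff4d-p4rtz --previous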
	
	
	==> describe nodes <==
	Name:               ha-735960
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-735960
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=ha-735960
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_01T12_15_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 12:15:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-735960
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 12:28:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 12:25:13 +0000   Mon, 01 Jul 2024 12:15:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 12:25:13 +0000   Mon, 01 Jul 2024 12:15:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 12:25:13 +0000   Mon, 01 Jul 2024 12:15:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 12:25:13 +0000   Mon, 01 Jul 2024 12:16:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.16
	  Hostname:    ha-735960
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a500128d5645446baeea5654afbcb060
	  System UUID:                a500128d-5645-446b-aeea-5654afbcb060
	  Boot ID:                    a9ffe936-2356-415e-aa5e-ceedcf15ed72
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pjfcw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-nk4lf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7db6d8ff4d-p4rtz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-735960                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-7f6hm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-735960             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-735960    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-lphzn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-735960             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-735960                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m42s                  kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeHasSufficientPID     13m                    kubelet          Node ha-735960 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                    kubelet          Node ha-735960 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                    kubelet          Node ha-735960 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           13m                    node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  NodeReady                12m                    kubelet          Node ha-735960 status is now: NodeReady
	  Normal  RegisteredNode           11m                    node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  RegisteredNode           8m31s                  node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  Starting                 4m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m40s (x8 over 4m40s)  kubelet          Node ha-735960 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m40s (x8 over 4m40s)  kubelet          Node ha-735960 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m40s (x7 over 4m40s)  kubelet          Node ha-735960 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  RegisteredNode           3m42s                  node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  RegisteredNode           2m6s                   node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  RegisteredNode           10s                    node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	
	
	Name:               ha-735960-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-735960-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=ha-735960
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_01T12_17_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 12:16:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-735960-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 12:29:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 12:25:08 +0000   Mon, 01 Jul 2024 12:16:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 12:25:08 +0000   Mon, 01 Jul 2024 12:16:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 12:25:08 +0000   Mon, 01 Jul 2024 12:16:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 12:25:08 +0000   Mon, 01 Jul 2024 12:17:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.86
	  Hostname:    ha-735960-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 58cf4e4771994f2084a06f7d76199172
	  System UUID:                58cf4e47-7199-4f20-84a0-6f7d76199172
	  Boot ID:                    41c32de2-f03a-41e4-b332-91dc3dc2ccaf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-twnb4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-735960-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-bztzv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-735960-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-735960-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-b6knb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-735960-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-735960-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m47s                  kube-proxy       
	  Normal   Starting                 8m44s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-735960-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-735960-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-735960-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           12m                    node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Warning  Rebooted                 8m49s                  kubelet          Node ha-735960-m02 has been rebooted, boot id: 64290a4a-a20d-436b-8567-0d3e8b822776
	  Normal   NodeHasSufficientPID     8m49s                  kubelet          Node ha-735960-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    8m49s                  kubelet          Node ha-735960-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 8m49s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8m49s                  kubelet          Node ha-735960-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           8m31s                  node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   Starting                 4m16s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  4m16s (x8 over 4m16s)  kubelet          Node ha-735960-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m16s (x8 over 4m16s)  kubelet          Node ha-735960-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m16s (x7 over 4m16s)  kubelet          Node ha-735960-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           3m53s                  node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   RegisteredNode           3m42s                  node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   RegisteredNode           2m6s                   node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   RegisteredNode           10s                    node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	
	
	Name:               ha-735960-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-735960-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=ha-735960
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_01T12_18_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 12:18:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-735960-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 12:29:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 12:26:42 +0000   Mon, 01 Jul 2024 12:26:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 12:26:42 +0000   Mon, 01 Jul 2024 12:26:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 12:26:42 +0000   Mon, 01 Jul 2024 12:26:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 12:26:42 +0000   Mon, 01 Jul 2024 12:26:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    ha-735960-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 995d5c3b59f847378d8e94e940e73ad6
	  System UUID:                995d5c3b-59f8-4737-8d8e-94e940e73ad6
	  Boot ID:                    bc7ccd53-413f-4b49-a89c-18c93eb90ad9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cpsct                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-735960-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-2424m                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-735960-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-735960-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-776rt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-735960-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-735960-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m19s                  kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-735960-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-735960-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-735960-m03 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           10m                    node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   RegisteredNode           8m31s                  node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   RegisteredNode           3m53s                  node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   RegisteredNode           3m42s                  node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   NodeNotReady             3m13s                  node-controller  Node ha-735960-m03 status is now: NodeNotReady
	  Normal   Starting                 2m24s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m24s (x3 over 2m24s)  kubelet          Node ha-735960-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m24s (x3 over 2m24s)  kubelet          Node ha-735960-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m24s (x3 over 2m24s)  kubelet          Node ha-735960-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m24s (x2 over 2m24s)  kubelet          Node ha-735960-m03 has been rebooted, boot id: bc7ccd53-413f-4b49-a89c-18c93eb90ad9
	  Normal   NodeReady                2m24s (x2 over 2m24s)  kubelet          Node ha-735960-m03 status is now: NodeReady
	  Normal   RegisteredNode           2m6s                   node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   RegisteredNode           10s                    node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	
	
	Name:               ha-735960-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-735960-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=ha-735960
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_01T12_19_10_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 12:19:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-735960-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 12:29:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 12:27:30 +0000   Mon, 01 Jul 2024 12:27:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 12:27:30 +0000   Mon, 01 Jul 2024 12:27:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 12:27:30 +0000   Mon, 01 Jul 2024 12:27:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 12:27:30 +0000   Mon, 01 Jul 2024 12:27:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    ha-735960-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd9ce62e425d4b9a9ba9ce7045362f6f
	  System UUID:                fd9ce62e-425d-4b9a-9ba9-ce7045362f6f
	  Boot ID:                    ac395c38-b578-4b7c-8c31-9939ff570d11
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6gx8s       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m57s
	  kube-system                 kube-proxy-25ssf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 94s                    kube-proxy       
	  Normal   Starting                 9m50s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  9m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9m57s (x2 over 9m57s)  kubelet          Node ha-735960-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m57s (x2 over 9m57s)  kubelet          Node ha-735960-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m57s (x2 over 9m57s)  kubelet          Node ha-735960-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m56s                  node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   RegisteredNode           9m55s                  node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   RegisteredNode           9m55s                  node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   NodeReady                9m45s                  kubelet          Node ha-735960-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m31s                  node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   RegisteredNode           3m53s                  node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   RegisteredNode           3m42s                  node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   NodeNotReady             3m13s                  node-controller  Node ha-735960-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           2m6s                   node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   Starting                 97s                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  97s                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  96s (x2 over 96s)      kubelet          Node ha-735960-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    96s (x2 over 96s)      kubelet          Node ha-735960-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     96s (x2 over 96s)      kubelet          Node ha-735960-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 96s                    kubelet          Node ha-735960-m04 has been rebooted, boot id: ac395c38-b578-4b7c-8c31-9939ff570d11
	  Normal   NodeReady                96s                    kubelet          Node ha-735960-m04 status is now: NodeReady
	  Normal   RegisteredNode           10s                    node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	
	
	Name:               ha-735960-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-735960-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=ha-735960
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_01T12_28_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 12:28:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-735960-m05
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 12:28:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 12:28:50 +0000   Mon, 01 Jul 2024 12:28:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 12:28:50 +0000   Mon, 01 Jul 2024 12:28:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 12:28:50 +0000   Mon, 01 Jul 2024 12:28:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 12:28:50 +0000   Mon, 01 Jul 2024 12:28:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.36
	  Hostname:    ha-735960-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac5bfa209102440dba489285dca931bd
	  System UUID:                ac5bfa20-9102-440d-ba48-9285dca931bd
	  Boot ID:                    7cf74a98-f899-47d1-9d91-60652f40aade
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-735960-m05                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26s
	  kube-system                 kindnet-c7gxg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      29s
	  kube-system                 kube-apiserver-ha-735960-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-controller-manager-ha-735960-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-proxy-7z9kk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-ha-735960-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-vip-ha-735960-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  NodeHasSufficientMemory  29s (x8 over 29s)  kubelet          Node ha-735960-m05 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s (x8 over 29s)  kubelet          Node ha-735960-m05 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s (x7 over 29s)  kubelet          Node ha-735960-m05 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           28s                node-controller  Node ha-735960-m05 event: Registered Node ha-735960-m05 in Controller
	  Normal  RegisteredNode           27s                node-controller  Node ha-735960-m05 event: Registered Node ha-735960-m05 in Controller
	  Normal  RegisteredNode           26s                node-controller  Node ha-735960-m05 event: Registered Node ha-735960-m05 in Controller
	  Normal  RegisteredNode           10s                node-controller  Node ha-735960-m05 event: Registered Node ha-735960-m05 in Controller
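	
	At capture time all five nodes, including the freshly joined control-plane node ha-735960-m05, report Ready. The same view can be regenerated with (a sketch, same assumed kubeconfig context):
	
	  $ kubectl --context ha-735960 describe nodes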
	
	
	==> dmesg <==
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050613] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036847] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.466422] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.742414] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.542503] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.890956] systemd-fstab-generator[479]: Ignoring "noauto" option for root device
	[  +0.054969] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050473] systemd-fstab-generator[491]: Ignoring "noauto" option for root device
	[  +2.186564] systemd-fstab-generator[1047]: Ignoring "noauto" option for root device
	[  +0.281745] systemd-fstab-generator[1084]: Ignoring "noauto" option for root device
	[  +0.110826] systemd-fstab-generator[1096]: Ignoring "noauto" option for root device
	[  +0.123894] systemd-fstab-generator[1110]: Ignoring "noauto" option for root device
	[  +2.248144] kauditd_printk_skb: 195 callbacks suppressed
	[  +0.296890] systemd-fstab-generator[1351]: Ignoring "noauto" option for root device
	[  +0.110572] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
	[  +0.111234] systemd-fstab-generator[1375]: Ignoring "noauto" option for root device
	[  +0.128120] systemd-fstab-generator[1390]: Ignoring "noauto" option for root device
	[  +0.483978] systemd-fstab-generator[1543]: Ignoring "noauto" option for root device
	[  +6.839985] kauditd_printk_skb: 176 callbacks suppressed
	[ +10.416982] kauditd_printk_skb: 40 callbacks suppressed
	[Jul 1 12:25] kauditd_printk_skb: 30 callbacks suppressed
	[ +36.086285] kauditd_printk_skb: 48 callbacks suppressed
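	
	Nothing beyond routine boot noise (systemd-fstab-generator entries, suppressed kauditd callbacks) appears in the kernel lines shown. The ring buffer can be re-read from inside the VM with (a sketch):
	
	  $ minikube ssh -p ha-735960 -- sudo dmesg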
	
	
	==> etcd [6a200a6b4902] <==
	{"level":"info","ts":"2024-07-01T12:23:54.888482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:54.888629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:54.888657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:54.888687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:54.88881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:56.288805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:56.288918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:56.288952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:56.289018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:56.289055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:57.688608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:57.688647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:57.688658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:57.688677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:57.688686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"warn","ts":"2024-07-01T12:23:57.772826Z","caller":"etcdserver/server.go:2089","msg":"failed to publish local member to cluster through raft","local-member-id":"b6c76b3131c1024","local-member-attributes":"{Name:ha-735960 ClientURLs:[https://192.168.39.16:2379]}","request-path":"/0/members/b6c76b3131c1024/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-07-01T12:23:59.088585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:59.088645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:59.08866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:59.088676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:59.088691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"warn","ts":"2024-07-01T12:23:59.821067Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c77bbbee62c21090","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:23:59.821149Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c77bbbee62c21090","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:23:59.836394Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-01T12:23:59.837603Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: no route to host"}
	
	
	==> etcd [852492f61fee] <==
	{"level":"info","ts":"2024-07-01T12:28:37.671074Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"ee1971b4bd9110fc"}
	{"level":"info","ts":"2024-07-01T12:28:37.671397Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b6c76b3131c1024","remote-peer-id":"ee1971b4bd9110fc"}
	{"level":"info","ts":"2024-07-01T12:28:37.67205Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b6c76b3131c1024","remote-peer-id":"ee1971b4bd9110fc"}
	{"level":"info","ts":"2024-07-01T12:28:37.672633Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b6c76b3131c1024","remote-peer-id":"ee1971b4bd9110fc"}
	{"level":"info","ts":"2024-07-01T12:28:37.676276Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"ee1971b4bd9110fc"}
	{"level":"info","ts":"2024-07-01T12:28:37.676412Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b6c76b3131c1024","remote-peer-id":"ee1971b4bd9110fc"}
	{"level":"info","ts":"2024-07-01T12:28:37.676632Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b6c76b3131c1024","remote-peer-id":"ee1971b4bd9110fc","remote-peer-urls":["https://192.168.39.36:2380"]}
	{"level":"info","ts":"2024-07-01T12:28:37.677132Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"b6c76b3131c1024","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"ee1971b4bd9110fc"}
	{"level":"info","ts":"2024-07-01T12:28:37.677162Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b6c76b3131c1024","remote-peer-id":"ee1971b4bd9110fc"}
	{"level":"warn","ts":"2024-07-01T12:28:38.253452Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"ee1971b4bd9110fc","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-07-01T12:28:38.431305Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.36:2380/version","remote-member-id":"ee1971b4bd9110fc","error":"Get \"https://192.168.39.36:2380/version\": dial tcp 192.168.39.36:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:28:38.43161Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ee1971b4bd9110fc","error":"Get \"https://192.168.39.36:2380/version\": dial tcp 192.168.39.36:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:28:38.7509Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"ee1971b4bd9110fc","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-07-01T12:28:39.371143Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"ee1971b4bd9110fc"}
	{"level":"info","ts":"2024-07-01T12:28:39.372808Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b6c76b3131c1024","remote-peer-id":"ee1971b4bd9110fc"}
	{"level":"info","ts":"2024-07-01T12:28:39.373278Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b6c76b3131c1024","to":"ee1971b4bd9110fc","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-01T12:28:39.373513Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b6c76b3131c1024","remote-peer-id":"ee1971b4bd9110fc"}
	{"level":"info","ts":"2024-07-01T12:28:39.373472Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b6c76b3131c1024","remote-peer-id":"ee1971b4bd9110fc"}
	{"level":"info","ts":"2024-07-01T12:28:39.442599Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b6c76b3131c1024","to":"ee1971b4bd9110fc","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-01T12:28:39.442655Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b6c76b3131c1024","remote-peer-id":"ee1971b4bd9110fc"}
	{"level":"warn","ts":"2024-07-01T12:28:39.740284Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"ee1971b4bd9110fc","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-07-01T12:28:40.240481Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"ee1971b4bd9110fc","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-07-01T12:28:41.251457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 switched to configuration voters=(823163343393787940 8598916461351987711 14374289268216565904 17156869276533068028)"}
	{"level":"info","ts":"2024-07-01T12:28:41.251925Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"cad58bbf0f3daddf","local-member-id":"b6c76b3131c1024"}
	{"level":"info","ts":"2024-07-01T12:28:41.252172Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"b6c76b3131c1024","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"ee1971b4bd9110fc"}
	
	
	==> kernel <==
	 12:29:06 up 5 min,  0 users,  load average: 0.24, 0.19, 0.10
	Linux ha-735960 5.10.207 #1 SMP Wed Jun 26 19:37:34 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bf788c37e091] <==
	I0701 12:28:36.582390       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:28:36.582561       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:28:36.583030       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:28:36.583157       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:28:46.597795       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:28:46.598072       1 main.go:227] handling current node
	I0701 12:28:46.598159       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:28:46.598185       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:28:46.598358       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:28:46.598450       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:28:46.598536       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:28:46.598605       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:28:46.598669       1 main.go:223] Handling node with IPs: map[192.168.39.36:{}]
	I0701 12:28:46.598722       1 main.go:250] Node ha-735960-m05 has CIDR [10.244.4.0/24] 
	I0701 12:28:46.598910       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.4.0/24 Src: <nil> Gw: 192.168.39.36 Flags: [] Table: 0} 
	I0701 12:28:56.614691       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:28:56.614924       1 main.go:227] handling current node
	I0701 12:28:56.615041       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:28:56.615118       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:28:56.615323       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:28:56.615459       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:28:56.615623       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:28:56.615708       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:28:56.615882       1 main.go:223] Handling node with IPs: map[192.168.39.36:{}]
	I0701 12:28:56.615965       1 main.go:250] Node ha-735960-m05 has CIDR [10.244.4.0/24] 
	
	
	==> kindnet [f472aef5302f] <==
	I0701 12:20:12.428842       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:20:22.443154       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:20:22.443292       1 main.go:227] handling current node
	I0701 12:20:22.443323       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:20:22.443388       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:20:22.443605       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:20:22.443653       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:20:22.443793       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:20:22.443836       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:20:32.451395       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:20:32.451431       1 main.go:227] handling current node
	I0701 12:20:32.451481       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:20:32.451486       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:20:32.451947       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:20:32.451980       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:20:32.452873       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:20:32.453015       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:20:42.470169       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:20:42.470264       1 main.go:227] handling current node
	I0701 12:20:42.470289       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:20:42.470302       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:20:42.470523       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:20:42.470616       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:20:42.470868       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:20:42.470914       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8ee3e44a43c3] <==
	I0701 12:25:11.632913       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0701 12:25:11.645811       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0701 12:25:11.645876       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0701 12:25:11.690103       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0701 12:25:11.690292       1 policy_source.go:224] refreshing policies
	I0701 12:25:11.718179       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0701 12:25:11.726917       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0701 12:25:11.729879       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0701 12:25:11.730212       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0701 12:25:11.730238       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0701 12:25:11.737552       1 shared_informer.go:320] Caches are synced for configmaps
	I0701 12:25:11.751625       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0701 12:25:11.752269       1 aggregator.go:165] initial CRD sync complete...
	I0701 12:25:11.752312       1 autoregister_controller.go:141] Starting autoregister controller
	I0701 12:25:11.752319       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0701 12:25:11.752325       1 cache.go:39] Caches are synced for autoregister controller
	I0701 12:25:11.756015       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0701 12:25:11.757180       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0701 12:25:11.779526       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0701 12:25:11.807352       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.86]
	I0701 12:25:11.811699       1 controller.go:615] quota admission added evaluator for: endpoints
	I0701 12:25:11.839496       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0701 12:25:11.843047       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0701 12:25:12.631101       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0701 12:25:13.074615       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.16 192.168.39.86]
	
	
	==> kube-apiserver [a3cb59ee8d57] <==
	I0701 12:24:33.660467       1 options.go:221] external host was not specified, using 192.168.39.16
	I0701 12:24:33.670142       1 server.go:148] Version: v1.30.2
	I0701 12:24:33.670491       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:24:34.296638       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0701 12:24:34.308879       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0701 12:24:34.324179       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0701 12:24:34.324219       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0701 12:24:34.326894       1 instance.go:299] Using reconciler: lease
	W0701 12:24:54.288105       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0701 12:24:54.289911       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0701 12:24:54.328399       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [67dc946c8c45] <==
	I0701 12:25:24.710493       1 shared_informer.go:320] Caches are synced for stateful set
	I0701 12:25:24.741914       1 shared_informer.go:320] Caches are synced for resource quota
	I0701 12:25:24.771129       1 shared_informer.go:320] Caches are synced for disruption
	I0701 12:25:24.825005       1 shared_informer.go:320] Caches are synced for persistent volume
	I0701 12:25:25.061636       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.968119ms"
	I0701 12:25:25.061928       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.671µs"
	I0701 12:25:25.231337       1 shared_informer.go:320] Caches are synced for garbage collector
	I0701 12:25:25.278015       1 shared_informer.go:320] Caches are synced for garbage collector
	I0701 12:25:25.278079       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0701 12:25:53.073870       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-735960-m04"
	I0701 12:25:53.162214       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.543735ms"
	I0701 12:25:53.163381       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="162.337µs"
	I0701 12:25:59.557437       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.6658ms"
	I0701 12:25:59.558362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="80.196µs"
	I0701 12:25:59.565576       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-s49dr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-s49dr\": the object has been modified; please apply your changes to the latest version and try again"
	I0701 12:25:59.566070       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"673ce502-ab01-47a0-ad3e-c33bd402b496", APIVersion:"v1", ResourceVersion:"234", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-s49dr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-s49dr": the object has been modified; please apply your changes to the latest version and try again
	I0701 12:26:43.750974       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="174.579µs"
	I0701 12:26:47.044231       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.968469ms"
	I0701 12:26:47.047107       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.336µs"
	I0701 12:27:30.083176       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-735960-m04"
	I0701 12:28:37.391320       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-735960-m04"
	I0701 12:28:37.393588       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-735960-m05\" does not exist"
	I0701 12:28:37.409892       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-735960-m05" podCIDRs=["10.244.4.0/24"]
	I0701 12:28:39.645666       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-735960-m05"
	I0701 12:28:50.194673       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-735960-m04"
	
	
	==> kube-controller-manager [ec2c061093f1] <==
	I0701 12:24:33.938262       1 serving.go:380] Generated self-signed cert in-memory
	I0701 12:24:34.667463       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0701 12:24:34.667501       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:24:34.670076       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0701 12:24:34.670322       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0701 12:24:34.670888       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0701 12:24:34.671075       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0701 12:24:55.336106       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.16:8443/healthz\": dial tcp 192.168.39.16:8443: connect: connection refused"
	
	
	==> kube-proxy [6116abe6039d] <==
	I0701 12:16:09.205590       1 server_linux.go:69] "Using iptables proxy"
	I0701 12:16:09.223098       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.16"]
	I0701 12:16:09.284088       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0701 12:16:09.284134       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0701 12:16:09.284152       1 server_linux.go:165] "Using iptables Proxier"
	I0701 12:16:09.286802       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0701 12:16:09.287240       1 server.go:872] "Version info" version="v1.30.2"
	I0701 12:16:09.287274       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:16:09.288803       1 config.go:192] "Starting service config controller"
	I0701 12:16:09.288830       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0701 12:16:09.289262       1 config.go:101] "Starting endpoint slice config controller"
	I0701 12:16:09.289283       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0701 12:16:09.290101       1 config.go:319] "Starting node config controller"
	I0701 12:16:09.290125       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0701 12:16:09.389941       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0701 12:16:09.390030       1 shared_informer.go:320] Caches are synced for service config
	I0701 12:16:09.390393       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [710f5c3a9f85] <==
	I0701 12:25:23.858069       1 server_linux.go:69] "Using iptables proxy"
	I0701 12:25:23.875125       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.16"]
	I0701 12:25:23.958416       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0701 12:25:23.958505       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0701 12:25:23.958526       1 server_linux.go:165] "Using iptables Proxier"
	I0701 12:25:23.963079       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0701 12:25:23.963683       1 server.go:872] "Version info" version="v1.30.2"
	I0701 12:25:23.963707       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:25:23.967807       1 config.go:192] "Starting service config controller"
	I0701 12:25:23.968544       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0701 12:25:23.968625       1 config.go:101] "Starting endpoint slice config controller"
	I0701 12:25:23.968632       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0701 12:25:23.972994       1 config.go:319] "Starting node config controller"
	I0701 12:25:23.973007       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0701 12:25:24.069380       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0701 12:25:24.069565       1 shared_informer.go:320] Caches are synced for service config
	I0701 12:25:24.073577       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2d71437c5f06] <==
	Trace[1766396451]: [10.001227292s] [10.001227292s] END
	E0701 12:23:38.923742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	W0701 12:23:40.712171       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.16:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:23:40.712228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.16:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:23:40.847258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.16:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35008->192.168.39.16:8443: read: connection reset by peer
	I0701 12:23:40.847402       1 trace.go:236] Trace[2065780204]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Jul-2024 12:23:30.463) (total time: 10384ms):
	Trace[2065780204]: ---"Objects listed" error:Get "https://192.168.39.16:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35008->192.168.39.16:8443: read: connection reset by peer 10384ms (12:23:40.847)
	Trace[2065780204]: [10.384136255s] [10.384136255s] END
	E0701 12:23:40.847432       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.16:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35008->192.168.39.16:8443: read: connection reset by peer
	W0701 12:23:40.847437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35050->192.168.39.16:8443: read: connection reset by peer
	W0701 12:23:40.847259       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.16:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35028->192.168.39.16:8443: read: connection reset by peer
	E0701 12:23:40.847495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35050->192.168.39.16:8443: read: connection reset by peer
	E0701 12:23:40.847499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.16:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35028->192.168.39.16:8443: read: connection reset by peer
	W0701 12:23:40.847682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.16:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35066->192.168.39.16:8443: read: connection reset by peer
	E0701 12:23:40.847714       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.16:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35066->192.168.39.16:8443: read: connection reset by peer
	W0701 12:23:40.848299       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.16:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35034->192.168.39.16:8443: read: connection reset by peer
	E0701 12:23:40.848357       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.16:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35034->192.168.39.16:8443: read: connection reset by peer
	W0701 12:23:51.660283       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:23:51.660337       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:23:54.252191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:23:54.252565       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:23:55.679907       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:23:55.680228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:24:00.290141       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0701 12:24:00.290379       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [693eb0b8f5d7] <==
	W0701 12:25:05.563752       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.16:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:25:05.563793       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.16:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:25:05.636901       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:25:05.637119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:25:11.653758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0701 12:25:11.654470       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0701 12:25:11.654763       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0701 12:25:11.655634       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0701 12:25:11.655894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0701 12:25:11.655933       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0701 12:25:11.659133       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0701 12:25:11.659348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0701 12:25:13.850760       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0701 12:28:37.499217       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-qfj9k\": pod kube-proxy-qfj9k is already assigned to node \"ha-735960-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-qfj9k" node="ha-735960-m05"
	E0701 12:28:37.497306       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-tbjq4\": pod kindnet-tbjq4 is already assigned to node \"ha-735960-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-tbjq4" node="ha-735960-m05"
	E0701 12:28:37.502534       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c7cd6384-ae4d-47ce-b880-302cf834667f(kube-system/kindnet-tbjq4) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-tbjq4"
	E0701 12:28:37.502801       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-tbjq4\": pod kindnet-tbjq4 is already assigned to node \"ha-735960-m05\"" pod="kube-system/kindnet-tbjq4"
	I0701 12:28:37.502972       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-tbjq4" node="ha-735960-m05"
	E0701 12:28:37.503947       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9cfb7903-e04a-4cdf-b39a-11e890622831(kube-system/kube-proxy-qfj9k) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-qfj9k"
	E0701 12:28:37.503993       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-qfj9k\": pod kube-proxy-qfj9k is already assigned to node \"ha-735960-m05\"" pod="kube-system/kube-proxy-qfj9k"
	I0701 12:28:37.504262       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-qfj9k" node="ha-735960-m05"
	E0701 12:28:37.500193       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-k4d6m\": pod kindnet-k4d6m is already assigned to node \"ha-735960-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-k4d6m" node="ha-735960-m05"
	E0701 12:28:37.505096       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod ccf70e82-9d7c-4c5f-ad9f-d02861ea0794(kube-system/kindnet-k4d6m) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-k4d6m"
	E0701 12:28:37.510144       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-k4d6m\": pod kindnet-k4d6m is already assigned to node \"ha-735960-m05\"" pod="kube-system/kindnet-k4d6m"
	I0701 12:28:37.510199       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-k4d6m" node="ha-735960-m05"
	
	
	==> kubelet <==
	Jul 01 12:25:24 ha-735960 kubelet[1550]: I0701 12:25:24.225255    1550 scope.go:117] "RemoveContainer" containerID="1ef6d9da6a9c5d6e77ef8d42735bdba288502d231394d299243bc1b669822d1c"
	Jul 01 12:25:25 ha-735960 kubelet[1550]: I0701 12:25:25.225212    1550 scope.go:117] "RemoveContainer" containerID="f472aef5302fd01233da1bd769162654c0b238cb1a3b0c9b24deef221c4821a3"
	Jul 01 12:25:26 ha-735960 kubelet[1550]: I0701 12:25:26.229286    1550 scope.go:117] "RemoveContainer" containerID="97d58c94f3fdcc84b84c3c46e6b25f8e7da118d5c9cd53058ae127fe580a40a7"
	Jul 01 12:25:26 ha-735960 kubelet[1550]: E0701 12:25:26.319340    1550 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 01 12:25:26 ha-735960 kubelet[1550]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 01 12:25:26 ha-735960 kubelet[1550]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 01 12:25:26 ha-735960 kubelet[1550]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 01 12:25:26 ha-735960 kubelet[1550]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 01 12:25:26 ha-735960 kubelet[1550]: I0701 12:25:26.443283    1550 scope.go:117] "RemoveContainer" containerID="14112a4d8f2cb5cfea8813c52de120eeef6fe681ebf589fd8708d1557c35b85f"
	Jul 01 12:25:26 ha-735960 kubelet[1550]: I0701 12:25:26.480472    1550 scope.go:117] "RemoveContainer" containerID="97d58c94f3fdcc84b84c3c46e6b25f8e7da118d5c9cd53058ae127fe580a40a7"
	Jul 01 12:26:26 ha-735960 kubelet[1550]: E0701 12:26:26.244909    1550 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 01 12:26:26 ha-735960 kubelet[1550]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 01 12:26:26 ha-735960 kubelet[1550]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 01 12:26:26 ha-735960 kubelet[1550]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 01 12:26:26 ha-735960 kubelet[1550]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 01 12:27:26 ha-735960 kubelet[1550]: E0701 12:27:26.245316    1550 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 01 12:27:26 ha-735960 kubelet[1550]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 01 12:27:26 ha-735960 kubelet[1550]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 01 12:27:26 ha-735960 kubelet[1550]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 01 12:27:26 ha-735960 kubelet[1550]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 01 12:28:26 ha-735960 kubelet[1550]: E0701 12:28:26.245797    1550 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 01 12:28:26 ha-735960 kubelet[1550]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 01 12:28:26 ha-735960 kubelet[1550]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 01 12:28:26 ha-735960 kubelet[1550]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 01 12:28:26 ha-735960 kubelet[1550]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-735960 -n ha-735960
helpers_test.go:261: (dbg) Run:  kubectl --context ha-735960 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (84.60s)
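The etcd [852492f61fee] log above records the usual join sequence for a new control-plane member: ee1971b4bd9110fc is added as a learner, promotion is refused several times with "can only promote a learner member which is in sync with leader", and the promotion finally applies as ConfChangeAddNode once the learner catches up. A minimal sketch for inspecting membership by hand after such a join, assuming etcdctl is reachable on the node (on this Buildroot image it may instead need to run via docker exec inside the etcd container) and that minikube keeps the etcd certificates under /var/lib/minikube/certs/etcd (the cert paths are assumptions, not taken from the log):

    # List members and their learner/voter status from the primary node.
    out/minikube-linux-amd64 -p ha-735960 ssh -- sudo ETCDCTL_API=3 etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      member list -w table

A member still flagged as a learner in this table would account for repeated "failed to promote a member" warnings like those above.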

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (3.55s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:304: expected profile "ha-735960" in json of 'profile list' to include 4 nodes but have 5 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-735960\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-735960\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\
":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-735960\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.16\",\"Port\":8443,\"KubernetesVersion\":\
"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.86\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.97\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.60\",\"Port\":0,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.168.39.36\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":fa
lse,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\
":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-735960 -n ha-735960
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-735960 logs -n 25: (1.638538002s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m04 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m03_ha-735960-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-735960 cp testdata/cp-test.txt                                                | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2826819896/001/cp-test_ha-735960-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960:/home/docker/cp-test_ha-735960-m04_ha-735960.txt                       |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960 sudo cat                                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m04_ha-735960.txt                                 |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m02:/home/docker/cp-test_ha-735960-m04_ha-735960-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m02 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m04_ha-735960-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m03:/home/docker/cp-test_ha-735960-m04_ha-735960-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | ha-735960-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-735960 ssh -n ha-735960-m03 sudo cat                                          | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | /home/docker/cp-test_ha-735960-m04_ha-735960-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-735960 node stop m02 -v=7                                                     | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-735960 node start m02 -v=7                                                    | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:20 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-735960 -v=7                                                           | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-735960 -v=7                                                                | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:20 UTC | 01 Jul 24 12:21 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-735960 --wait=true -v=7                                                    | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-735960                                                                | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:22 UTC |                     |
	| node    | ha-735960 node delete m03 -v=7                                                   | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-735960 stop -v=7                                                              | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:23 UTC | 01 Jul 24 12:24 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-735960 --wait=true                                                         | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:24 UTC | 01 Jul 24 12:27 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	| node    | add -p ha-735960                                                                 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:27 UTC | 01 Jul 24 12:29 UTC |
	|         | --control-plane -v=7                                                             |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/01 12:24:02
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 12:24:02.565321  653531 out.go:291] Setting OutFile to fd 1 ...
	I0701 12:24:02.565576  653531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:24:02.565584  653531 out.go:304] Setting ErrFile to fd 2...
	I0701 12:24:02.565588  653531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:24:02.565782  653531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
	I0701 12:24:02.566304  653531 out.go:298] Setting JSON to false
	I0701 12:24:02.567248  653531 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7581,"bootTime":1719829062,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 12:24:02.567318  653531 start.go:139] virtualization: kvm guest
	I0701 12:24:02.569903  653531 out.go:177] * [ha-735960] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0701 12:24:02.571307  653531 notify.go:220] Checking for updates...
	I0701 12:24:02.571336  653531 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 12:24:02.572748  653531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:24:02.574111  653531 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:24:02.575333  653531 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-630650/.minikube
	I0701 12:24:02.576670  653531 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 12:24:02.578040  653531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:24:02.579691  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:02.580063  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:02.580118  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:02.595084  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46077
	I0701 12:24:02.595523  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:02.596065  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:02.596090  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:02.596376  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:02.596591  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:02.596798  653531 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 12:24:02.597091  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:02.597140  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:02.611685  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43891
	I0701 12:24:02.612062  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:02.612574  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:02.612596  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:02.612886  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:02.613060  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:02.647232  653531 out.go:177] * Using the kvm2 driver based on existing profile
	I0701 12:24:02.648606  653531 start.go:297] selected driver: kvm2
	I0701 12:24:02.648624  653531 start.go:901] validating driver "kvm2" against &{Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:24:02.648774  653531 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:24:02.649109  653531 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:24:02.649176  653531 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19166-630650/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0701 12:24:02.663726  653531 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0701 12:24:02.664362  653531 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:24:02.664394  653531 cni.go:84] Creating CNI manager for ""
	I0701 12:24:02.664400  653531 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0701 12:24:02.664456  653531 start.go:340] cluster config:
	{Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:24:02.664569  653531 iso.go:125] acquiring lock: {Name:mk5c70910f61bc270c83609c48670eaf9d7e0602 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:24:02.666644  653531 out.go:177] * Starting "ha-735960" primary control-plane node in "ha-735960" cluster
	I0701 12:24:02.667913  653531 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:24:02.667956  653531 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0701 12:24:02.667963  653531 cache.go:56] Caching tarball of preloaded images
	I0701 12:24:02.668051  653531 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:24:02.668065  653531 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:24:02.668178  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:24:02.668362  653531 start.go:360] acquireMachinesLock for ha-735960: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:24:02.668420  653531 start.go:364] duration metric: took 37.459µs to acquireMachinesLock for "ha-735960"
	I0701 12:24:02.668440  653531 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:24:02.668454  653531 fix.go:54] fixHost starting: 
	I0701 12:24:02.668711  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:02.668747  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:02.682861  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39713
	I0701 12:24:02.683321  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:02.683791  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:02.683812  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:02.684145  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:02.684389  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:02.684573  653531 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:24:02.686019  653531 fix.go:112] recreateIfNeeded on ha-735960: state=Stopped err=<nil>
	I0701 12:24:02.686043  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	W0701 12:24:02.686187  653531 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:24:02.688339  653531 out.go:177] * Restarting existing kvm2 VM for "ha-735960" ...
	I0701 12:24:02.690004  653531 main.go:141] libmachine: (ha-735960) Calling .Start
	I0701 12:24:02.690210  653531 main.go:141] libmachine: (ha-735960) Ensuring networks are active...
	I0701 12:24:02.690928  653531 main.go:141] libmachine: (ha-735960) Ensuring network default is active
	I0701 12:24:02.691237  653531 main.go:141] libmachine: (ha-735960) Ensuring network mk-ha-735960 is active
	I0701 12:24:02.691618  653531 main.go:141] libmachine: (ha-735960) Getting domain xml...
	I0701 12:24:02.692321  653531 main.go:141] libmachine: (ha-735960) Creating domain...
	I0701 12:24:03.888996  653531 main.go:141] libmachine: (ha-735960) Waiting to get IP...
	I0701 12:24:03.889967  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:03.890480  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:03.890588  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:03.890454  653582 retry.go:31] will retry after 276.532377ms: waiting for machine to come up
	I0701 12:24:04.169193  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:04.169696  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:04.169722  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:04.169655  653582 retry.go:31] will retry after 379.701447ms: waiting for machine to come up
	I0701 12:24:04.551325  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:04.551741  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:04.551768  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:04.551690  653582 retry.go:31] will retry after 390.796114ms: waiting for machine to come up
	I0701 12:24:04.944503  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:04.944879  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:04.944907  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:04.944824  653582 retry.go:31] will retry after 501.242083ms: waiting for machine to come up
	I0701 12:24:05.447754  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:05.448283  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:05.448315  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:05.448261  653582 retry.go:31] will retry after 739.761709ms: waiting for machine to come up
	I0701 12:24:06.189145  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:06.189602  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:06.189631  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:06.189545  653582 retry.go:31] will retry after 652.97975ms: waiting for machine to come up
	I0701 12:24:06.844427  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:06.844894  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:06.844917  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:06.844845  653582 retry.go:31] will retry after 1.122975762s: waiting for machine to come up
	I0701 12:24:07.969893  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:07.970374  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:07.970427  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:07.970304  653582 retry.go:31] will retry after 933.604302ms: waiting for machine to come up
	I0701 12:24:08.905636  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:08.905959  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:08.905983  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:08.905909  653582 retry.go:31] will retry after 1.753153445s: waiting for machine to come up
	I0701 12:24:10.662098  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:10.662553  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:10.662622  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:10.662537  653582 retry.go:31] will retry after 1.625060377s: waiting for machine to come up
	I0701 12:24:12.290368  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:12.290788  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:12.290822  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:12.290695  653582 retry.go:31] will retry after 2.741972388s: waiting for machine to come up
	I0701 12:24:15.036161  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:15.036634  653531 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
	I0701 12:24:15.036661  653531 main.go:141] libmachine: (ha-735960) DBG | I0701 12:24:15.036581  653582 retry.go:31] will retry after 3.113034425s: waiting for machine to come up
	I0701 12:24:18.151534  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.152048  653531 main.go:141] libmachine: (ha-735960) Found IP for machine: 192.168.39.16
	I0701 12:24:18.152074  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has current primary IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.152083  653531 main.go:141] libmachine: (ha-735960) Reserving static IP address...
	I0701 12:24:18.152579  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "ha-735960", mac: "52:54:00:6c:20:7c", ip: "192.168.39.16"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.152611  653531 main.go:141] libmachine: (ha-735960) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960", mac: "52:54:00:6c:20:7c", ip: "192.168.39.16"}
	I0701 12:24:18.152626  653531 main.go:141] libmachine: (ha-735960) Reserved static IP address: 192.168.39.16
	I0701 12:24:18.152643  653531 main.go:141] libmachine: (ha-735960) Waiting for SSH to be available...
	I0701 12:24:18.152674  653531 main.go:141] libmachine: (ha-735960) DBG | Getting to WaitForSSH function...
	I0701 12:24:18.154511  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.154741  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.154760  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.154885  653531 main.go:141] libmachine: (ha-735960) DBG | Using SSH client type: external
	I0701 12:24:18.154912  653531 main.go:141] libmachine: (ha-735960) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa (-rw-------)
	I0701 12:24:18.154954  653531 main.go:141] libmachine: (ha-735960) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:24:18.154968  653531 main.go:141] libmachine: (ha-735960) DBG | About to run SSH command:
	I0701 12:24:18.154991  653531 main.go:141] libmachine: (ha-735960) DBG | exit 0
	I0701 12:24:18.274220  653531 main.go:141] libmachine: (ha-735960) DBG | SSH cmd err, output: <nil>: 
	I0701 12:24:18.274677  653531 main.go:141] libmachine: (ha-735960) Calling .GetConfigRaw
	I0701 12:24:18.275344  653531 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:24:18.277628  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.278085  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.278118  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.278447  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:24:18.278671  653531 machine.go:94] provisionDockerMachine start ...
	I0701 12:24:18.278694  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:18.278956  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:18.281138  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.281565  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.281590  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.281697  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:18.281884  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.282084  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.282290  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:18.282484  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:18.282777  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:18.282790  653531 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:24:18.378249  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:24:18.378279  653531 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:24:18.378583  653531 buildroot.go:166] provisioning hostname "ha-735960"
	I0701 12:24:18.378614  653531 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:24:18.378869  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:18.381421  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.381789  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.381817  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.381949  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:18.382158  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.382297  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.382445  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:18.382576  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:18.382763  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:18.382780  653531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960 && echo "ha-735960" | sudo tee /etc/hostname
	I0701 12:24:18.491369  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960
	
	I0701 12:24:18.491396  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:18.494039  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.494432  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.494460  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.494718  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:18.494939  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.495106  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:18.495259  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:18.495452  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:18.495675  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:18.495699  653531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:24:18.598595  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 12:24:18.598631  653531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:24:18.598653  653531 buildroot.go:174] setting up certificates
	I0701 12:24:18.598662  653531 provision.go:84] configureAuth start
	I0701 12:24:18.598670  653531 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
	I0701 12:24:18.598968  653531 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:24:18.601563  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.602005  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.602036  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.602215  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:18.604739  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.605246  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:18.605273  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:18.605427  653531 provision.go:143] copyHostCerts
	I0701 12:24:18.605458  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:24:18.605515  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:24:18.605523  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:24:18.605588  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:24:18.605671  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:24:18.605688  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:24:18.605695  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:24:18.605718  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:24:18.605772  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:24:18.605788  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:24:18.605794  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:24:18.605814  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:24:18.605871  653531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960 san=[127.0.0.1 192.168.39.16 ha-735960 localhost minikube]
	I0701 12:24:19.079576  653531 provision.go:177] copyRemoteCerts
	I0701 12:24:19.079661  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:24:19.079696  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:19.082253  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.082610  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:19.082638  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.082786  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:19.082996  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.083179  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:19.083325  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:19.160543  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:24:19.160634  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:24:19.183871  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:24:19.183957  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0701 12:24:19.206811  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:24:19.206911  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 12:24:19.229160  653531 provision.go:87] duration metric: took 630.48062ms to configureAuth
	I0701 12:24:19.229197  653531 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:24:19.229480  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:19.229521  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:19.229827  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:19.232595  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.233032  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:19.233062  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.233264  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:19.233514  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.233696  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.233834  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:19.234025  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:19.234222  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:19.234237  653531 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:24:19.331417  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:24:19.331446  653531 buildroot.go:70] root file system type: tmpfs
	I0701 12:24:19.331582  653531 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:24:19.331605  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:19.334269  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.334634  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:19.334660  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.334900  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:19.335107  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.335308  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.335479  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:19.335645  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:19.335809  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:19.335865  653531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:24:19.443562  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:24:19.443592  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:19.446176  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.446524  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:19.446556  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:19.446723  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:19.446930  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.447105  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:19.447245  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:19.447408  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:19.447591  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:19.447611  653531 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:24:21.232310  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 12:24:21.232343  653531 machine.go:97] duration metric: took 2.953656212s to provisionDockerMachine
	I0701 12:24:21.232359  653531 start.go:293] postStartSetup for "ha-735960" (driver="kvm2")
	I0701 12:24:21.232371  653531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:24:21.232390  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.232744  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:24:21.232777  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:21.235119  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.235559  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.235584  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.235772  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:21.235940  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.236122  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:21.236248  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:21.313134  653531 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:24:21.317084  653531 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:24:21.317118  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:24:21.317202  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:24:21.317295  653531 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:24:21.317307  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:24:21.317399  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:24:21.326681  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:24:21.349306  653531 start.go:296] duration metric: took 116.926386ms for postStartSetup
	I0701 12:24:21.349360  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.349703  653531 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:24:21.349739  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:21.352499  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.352917  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.352946  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.353148  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:21.353394  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.353561  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:21.353790  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:21.433784  653531 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:24:21.433859  653531 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:24:21.475659  653531 fix.go:56] duration metric: took 18.807194904s for fixHost
	I0701 12:24:21.475706  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:21.478623  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.479038  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.479071  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.479250  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:21.479467  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.479584  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.479702  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:21.479838  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:21.480034  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0701 12:24:21.480048  653531 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0701 12:24:21.586741  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836661.563256683
	
	I0701 12:24:21.586770  653531 fix.go:216] guest clock: 1719836661.563256683
	I0701 12:24:21.586783  653531 fix.go:229] Guest: 2024-07-01 12:24:21.563256683 +0000 UTC Remote: 2024-07-01 12:24:21.475685785 +0000 UTC m=+18.945537438 (delta=87.570898ms)
	I0701 12:24:21.586836  653531 fix.go:200] guest clock delta is within tolerance: 87.570898ms
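
The clock check immediately above runs "date +%s.%N" in the guest, parses the seconds.nanoseconds reply (1719836661.563256683), and compares it with the host-side timestamp; here the 87.570898ms delta is within tolerance, so no resync is needed. A small self-contained Go sketch of that comparison; the one-second tolerance is an illustrative assumption, not minikube's exact fix.go threshold:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N`
// (e.g. "1719836661.563256683") into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1719836661.563256683") // value from the log above
	if err != nil {
		panic(err)
	}
	host := time.Now() // in the real check, the host-side timestamp
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // illustrative, not the exact minikube constant
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
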
	I0701 12:24:21.586844  653531 start.go:83] releasing machines lock for "ha-735960", held for 18.918411663s
	I0701 12:24:21.586868  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.587158  653531 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:24:21.589666  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.590034  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.590064  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.590216  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.590761  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.590954  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:21.591048  653531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:24:21.591096  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:21.591207  653531 ssh_runner.go:195] Run: cat /version.json
	I0701 12:24:21.591235  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:21.593711  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.593857  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.594066  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.594091  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.594278  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:21.594408  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:21.594432  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:21.594491  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.594596  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:21.594674  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:21.594780  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:21.594865  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:21.594903  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:21.595018  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:21.688196  653531 ssh_runner.go:195] Run: systemctl --version
	I0701 12:24:21.693743  653531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0701 12:24:21.698823  653531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:24:21.698901  653531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:24:21.714364  653531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:24:21.714404  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:24:21.714572  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:24:21.734692  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:24:21.744599  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:24:21.754591  653531 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:24:21.754664  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:24:21.764718  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:24:21.774564  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:24:21.784516  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:24:21.794592  653531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:24:21.804646  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:24:21.814497  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:24:21.824363  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:24:21.834566  653531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:24:21.843852  653531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:24:21.852939  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:21.959107  653531 ssh_runner.go:195] Run: sudo systemctl restart containerd
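
Each sed invocation above rewrites one key of /etc/containerd/config.toml in place while preserving indentation, e.g. forcing SystemdCgroup = false to match the cgroupfs driver choice. The same substitution expressed with Go's regexp package (the sample config text is illustrative):

package main

import (
	"fmt"
	"regexp"
)

// The sed command in the log,
//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
// rewrites the SystemdCgroup key while keeping its leading indentation.
func main() {
	config := "    [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
		"      SystemdCgroup = true\n"
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
}
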
	I0701 12:24:21.981473  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:24:21.981556  653531 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:24:21.995383  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:24:22.009843  653531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:24:22.030755  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:24:22.043208  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:24:22.055774  653531 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:24:22.080888  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:24:22.093331  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:24:22.110088  653531 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:24:22.113487  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:24:22.121907  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:24:22.137227  653531 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:24:22.245438  653531 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:24:22.351994  653531 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:24:22.352150  653531 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 12:24:22.368109  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:22.474388  653531 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:24:24.887396  653531 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.412956412s)
	I0701 12:24:24.887487  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 12:24:24.900113  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:24:24.912702  653531 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:24:25.020545  653531 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:24:25.134056  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:25.242294  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:24:25.258251  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:24:25.270762  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:25.375199  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0701 12:24:25.454939  653531 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:24:25.455020  653531 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0701 12:24:25.460209  653531 start.go:562] Will wait 60s for crictl version
	I0701 12:24:25.460266  653531 ssh_runner.go:195] Run: which crictl
	I0701 12:24:25.463721  653531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:24:25.498358  653531 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.1
	RuntimeApiVersion:  v1
	I0701 12:24:25.498453  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:24:25.525766  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:24:25.549708  653531 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
	I0701 12:24:25.549757  653531 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:24:25.552699  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:25.553097  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:25.553132  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:25.553374  653531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0701 12:24:25.557331  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
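
The /etc/hosts rewrite above is idempotent: it filters out any existing host.minikube.internal line and appends the current mapping, writing the result through a temp file and sudo cp. A string-level Go sketch of that filter-and-append step (file I/O and sudo are left out):

package main

import (
	"fmt"
	"strings"
)

// refreshHostsEntry drops any existing line for name and appends the current
// "ip<TAB>name" mapping -- the effect of the grep -v / echo pipeline in the log.
func refreshHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry; re-added below with the current IP
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	fmt.Print(refreshHostsEntry(in, "192.168.39.1", "host.minikube.internal"))
}
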
	I0701 12:24:25.569653  653531 kubeadm.go:877] updating cluster {Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0701 12:24:25.569810  653531 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:24:25.569866  653531 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:24:25.593428  653531 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0701 12:24:25.593450  653531 docker.go:615] Images already preloaded, skipping extraction
	I0701 12:24:25.593535  653531 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:24:25.613507  653531 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0701 12:24:25.613542  653531 cache_images.go:84] Images are preloaded, skipping loading
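
Whether the preload tarball needs to be extracted is decided by listing docker images --format {{.Repository}}:{{.Tag}} and checking that every required image is present, as in the two identical listings above. A toy Go sketch of that set-membership check, with both image lists hard-coded for illustration:

package main

import "fmt"

func main() {
	// A subset of the preload list printed above.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.30.2",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/pause:3.9",
	}
	// In the real flow this set is built from the `docker images` output.
	present := map[string]bool{
		"registry.k8s.io/kube-apiserver:v1.30.2": true,
		"registry.k8s.io/etcd:3.5.12-0":          true,
		"registry.k8s.io/pause:3.9":              true,
	}
	missing := false
	for _, img := range required {
		if !present[img] {
			fmt.Println("missing:", img)
			missing = true
		}
	}
	if !missing {
		fmt.Println("images already preloaded, skipping extraction")
	}
}
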
	I0701 12:24:25.613557  653531 kubeadm.go:928] updating node { 192.168.39.16 8443 v1.30.2 docker true true} ...
	I0701 12:24:25.613677  653531 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0701 12:24:25.613736  653531 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0701 12:24:25.636959  653531 cni.go:84] Creating CNI manager for ""
	I0701 12:24:25.636987  653531 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0701 12:24:25.637001  653531 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0701 12:24:25.637033  653531 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.16 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-735960 NodeName:ha-735960 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0701 12:24:25.637207  653531 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-735960"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
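
The kubeadm config printed above is rendered from the options struct logged at kubeadm.go:181. A compact Go sketch of that template-driven generation; the template text here is a trimmed illustration, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

type kubeadmOpts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	PodSubnet        string
	ServiceCIDR      string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	// Values taken from the log above.
	opts := kubeadmOpts{
		AdvertiseAddress: "192.168.39.16",
		APIServerPort:    8443,
		NodeName:         "ha-735960",
		PodSubnet:        "10.244.0.0/16",
		ServiceCIDR:      "10.96.0.0/12",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
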
	I0701 12:24:25.637234  653531 kube-vip.go:115] generating kube-vip config ...
	I0701 12:24:25.637291  653531 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0701 12:24:25.651059  653531 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0701 12:24:25.651192  653531 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
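
The kube-vip manifest above configures lease-based leader election with vip_leaseduration=5, vip_renewdeadline=3 and vip_retryperiod=1. For such settings to be coherent, the lease duration must exceed the renew deadline, which in turn must exceed the retry period (client-go's leader election additionally compares the renew deadline against a jittered retry period). A simplified validation sketch of that ordering:

package main

import (
	"fmt"
	"time"
)

// validateLeaderElection checks the ordering leaseDuration > renewDeadline >
// retryPeriod, so a failed leader's lease expires only after renewal attempts
// have genuinely stopped. This is a simplified form of the constraint; it
// omits client-go's jitter factor.
func validateLeaderElection(lease, renew, retry time.Duration) error {
	if !(lease > renew && renew > retry) {
		return fmt.Errorf("invalid timings: lease=%v renew=%v retry=%v", lease, renew, retry)
	}
	return nil
}

func main() {
	// The values from the kube-vip manifest above.
	if err := validateLeaderElection(5*time.Second, 3*time.Second, time.Second); err != nil {
		panic(err)
	}
	fmt.Println("kube-vip leader election timings are consistent")
}
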
	I0701 12:24:25.651261  653531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 12:24:25.660952  653531 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:24:25.661049  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0701 12:24:25.669901  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0701 12:24:25.685801  653531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:24:25.701259  653531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0701 12:24:25.717237  653531 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0701 12:24:25.732682  653531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0701 12:24:25.736549  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:24:25.748348  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:25.857797  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:24:25.874307  653531 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960 for IP: 192.168.39.16
	I0701 12:24:25.874340  653531 certs.go:194] generating shared ca certs ...
	I0701 12:24:25.874365  653531 certs.go:226] acquiring lock for ca certs: {Name:mk34e166bfd069e523b2325e14d1812c523bff53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:24:25.874584  653531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key
	I0701 12:24:25.874645  653531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key
	I0701 12:24:25.874659  653531 certs.go:256] generating profile certs ...
	I0701 12:24:25.874733  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key
	I0701 12:24:25.874814  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af
	I0701 12:24:25.874868  653531 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key
	I0701 12:24:25.874883  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 12:24:25.874918  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 12:24:25.874937  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 12:24:25.874955  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 12:24:25.874972  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0701 12:24:25.874991  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0701 12:24:25.875008  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0701 12:24:25.875025  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0701 12:24:25.875093  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem (1338 bytes)
	W0701 12:24:25.875146  653531 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854_empty.pem, impossibly tiny 0 bytes
	I0701 12:24:25.875161  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 12:24:25.875193  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:24:25.875224  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:24:25.875261  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem (1675 bytes)
	I0701 12:24:25.875343  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:24:25.875386  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /usr/share/ca-certificates/6378542.pem
	I0701 12:24:25.875409  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:25.875426  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem -> /usr/share/ca-certificates/637854.pem
	I0701 12:24:25.876083  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:24:25.910761  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:24:25.938480  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:24:25.963281  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:24:25.989413  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0701 12:24:26.015055  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 12:24:26.039406  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 12:24:26.062955  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 12:24:26.093960  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /usr/share/ca-certificates/6378542.pem (1708 bytes)
	I0701 12:24:26.125896  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:24:26.156031  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem --> /usr/share/ca-certificates/637854.pem (1338 bytes)
	I0701 12:24:26.181375  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 12:24:26.209470  653531 ssh_runner.go:195] Run: openssl version
	I0701 12:24:26.218386  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6378542.pem && ln -fs /usr/share/ca-certificates/6378542.pem /etc/ssl/certs/6378542.pem"
	I0701 12:24:26.233243  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6378542.pem
	I0701 12:24:26.241811  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 12:11 /usr/share/ca-certificates/6378542.pem
	I0701 12:24:26.241888  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem
	I0701 12:24:26.250559  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6378542.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 12:24:26.277768  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:24:26.305594  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:26.315685  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:26.315763  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:26.330923  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 12:24:26.351095  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/637854.pem && ln -fs /usr/share/ca-certificates/637854.pem /etc/ssl/certs/637854.pem"
	I0701 12:24:26.374355  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/637854.pem
	I0701 12:24:26.380759  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 12:11 /usr/share/ca-certificates/637854.pem
	I0701 12:24:26.380836  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/637854.pem
	I0701 12:24:26.392584  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/637854.pem /etc/ssl/certs/51391683.0"
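
Each CA certificate above is made discoverable by symlinking it under /etc/ssl/certs as <subject-hash>.0, where the hash (for example b5213941 for minikubeCA.pem) comes from openssl x509 -hash -noout. A Go sketch of that hash-and-link step, shelling out to openssl just as the log does; running it for real requires write access to the certs directory:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of a PEM certificate and
// symlinks it into certsDir as <hash>.0, so TLS libraries can look the CA up
// by subject -- the pattern behind the ln -fs commands in the log.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
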
	I0701 12:24:26.411160  653531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 12:24:26.419483  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 12:24:26.437558  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 12:24:26.444826  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 12:24:26.454628  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 12:24:26.467473  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 12:24:26.476039  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
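
The openssl x509 -checkend 86400 probes above ask whether each certificate expires within the next 24 hours, which feeds minikube's decision about regenerating certs. The equivalent check with Go's crypto/x509 (the certificate path is taken from the log; any readable PEM file works):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -checkend 86400`: it reports whether
// the certificate's NotAfter falls inside the given window from now.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
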
	I0701 12:24:26.482296  653531 kubeadm.go:391] StartCluster: {Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:24:26.482508  653531 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0701 12:24:26.498609  653531 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0701 12:24:26.509374  653531 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0701 12:24:26.509403  653531 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0701 12:24:26.509410  653531 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0701 12:24:26.509466  653531 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 12:24:26.518865  653531 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 12:24:26.519310  653531 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-735960" does not appear in /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:24:26.519460  653531 kubeconfig.go:62] /home/jenkins/minikube-integration/19166-630650/kubeconfig needs updating (will repair): [kubeconfig missing "ha-735960" cluster setting kubeconfig missing "ha-735960" context setting]
	I0701 12:24:26.519772  653531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/kubeconfig: {Name:mke3ef9d019eff4edd273b00c416fd77ed009242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:24:26.520253  653531 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:24:26.520566  653531 kapi.go:59] client config for ha-735960: &rest.Config{Host:"https://192.168.39.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key", CAFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfbb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0701 12:24:26.521041  653531 cert_rotation.go:137] Starting client certificate rotation controller
	I0701 12:24:26.521235  653531 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 12:24:26.530555  653531 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.16
	I0701 12:24:26.530586  653531 kubeadm.go:591] duration metric: took 21.167521ms to restartPrimaryControlPlane
	I0701 12:24:26.530596  653531 kubeadm.go:393] duration metric: took 48.31583ms to StartCluster
	I0701 12:24:26.530618  653531 settings.go:142] acquiring lock: {Name:mk6f7c85ea77a73ff0ac851454721f2e6e309153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:24:26.530700  653531 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:24:26.531272  653531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/kubeconfig: {Name:mke3ef9d019eff4edd273b00c416fd77ed009242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:24:26.531528  653531 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:24:26.531554  653531 start.go:240] waiting for startup goroutines ...
	I0701 12:24:26.531572  653531 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0701 12:24:26.531767  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:26.534496  653531 out.go:177] * Enabled addons: 
	I0701 12:24:26.535873  653531 addons.go:510] duration metric: took 4.304011ms for enable addons: enabled=[]
	I0701 12:24:26.535915  653531 start.go:245] waiting for cluster config update ...
	I0701 12:24:26.535925  653531 start.go:254] writing updated cluster config ...
	I0701 12:24:26.537498  653531 out.go:177] 
	I0701 12:24:26.539211  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:26.539336  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:24:26.541509  653531 out.go:177] * Starting "ha-735960-m02" control-plane node in "ha-735960" cluster
	I0701 12:24:26.542802  653531 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:24:26.542833  653531 cache.go:56] Caching tarball of preloaded images
	I0701 12:24:26.542967  653531 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:24:26.542983  653531 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:24:26.543093  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:24:26.543293  653531 start.go:360] acquireMachinesLock for ha-735960-m02: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:24:26.543355  653531 start.go:364] duration metric: took 39.786µs to acquireMachinesLock for "ha-735960-m02"
	I0701 12:24:26.543382  653531 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:24:26.543393  653531 fix.go:54] fixHost starting: m02
	I0701 12:24:26.543665  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:26.543694  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:26.558741  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34329
	I0701 12:24:26.559300  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:26.559767  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:26.559790  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:26.560107  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:26.560324  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:26.560471  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetState
	I0701 12:24:26.561903  653531 fix.go:112] recreateIfNeeded on ha-735960-m02: state=Stopped err=<nil>
	I0701 12:24:26.561933  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	W0701 12:24:26.562104  653531 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:24:26.564118  653531 out.go:177] * Restarting existing kvm2 VM for "ha-735960-m02" ...
	I0701 12:24:26.565547  653531 main.go:141] libmachine: (ha-735960-m02) Calling .Start
	I0701 12:24:26.565742  653531 main.go:141] libmachine: (ha-735960-m02) Ensuring networks are active...
	I0701 12:24:26.566439  653531 main.go:141] libmachine: (ha-735960-m02) Ensuring network default is active
	I0701 12:24:26.566739  653531 main.go:141] libmachine: (ha-735960-m02) Ensuring network mk-ha-735960 is active
	I0701 12:24:26.567095  653531 main.go:141] libmachine: (ha-735960-m02) Getting domain xml...
	I0701 12:24:26.567681  653531 main.go:141] libmachine: (ha-735960-m02) Creating domain...
	I0701 12:24:27.772734  653531 main.go:141] libmachine: (ha-735960-m02) Waiting to get IP...
	I0701 12:24:27.773478  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:27.773801  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:27.773853  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:27.773777  653719 retry.go:31] will retry after 217.058414ms: waiting for machine to come up
	I0701 12:24:27.992187  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:27.992715  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:27.992745  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:27.992653  653719 retry.go:31] will retry after 295.156992ms: waiting for machine to come up
	I0701 12:24:28.289101  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:28.289597  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:28.289630  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:28.289531  653719 retry.go:31] will retry after 353.406325ms: waiting for machine to come up
	I0701 12:24:28.644006  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:28.644479  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:28.644510  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:28.644437  653719 retry.go:31] will retry after 398.224689ms: waiting for machine to come up
	I0701 12:24:29.044072  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:29.044514  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:29.044545  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:29.044461  653719 retry.go:31] will retry after 547.020131ms: waiting for machine to come up
	I0701 12:24:29.593264  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:29.593690  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:29.593709  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:29.593653  653719 retry.go:31] will retry after 787.756844ms: waiting for machine to come up
	I0701 12:24:30.382731  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:30.383180  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:30.383209  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:30.383137  653719 retry.go:31] will retry after 870.067991ms: waiting for machine to come up
	I0701 12:24:31.254672  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:31.255252  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:31.255285  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:31.255205  653719 retry.go:31] will retry after 1.371479719s: waiting for machine to come up
	I0701 12:24:32.628605  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:32.629092  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:32.629124  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:32.629036  653719 retry.go:31] will retry after 1.347043223s: waiting for machine to come up
	I0701 12:24:33.978739  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:33.979246  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:33.979275  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:33.979195  653719 retry.go:31] will retry after 2.257830197s: waiting for machine to come up
	I0701 12:24:36.239828  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:36.240400  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:36.240433  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:36.240355  653719 retry.go:31] will retry after 2.834526493s: waiting for machine to come up
	I0701 12:24:39.078121  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:39.078416  653531 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
	I0701 12:24:39.078448  653531 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:24:39.078379  653719 retry.go:31] will retry after 2.465969863s: waiting for machine to come up
	I0701 12:24:41.547043  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.547535  653531 main.go:141] libmachine: (ha-735960-m02) Found IP for machine: 192.168.39.86
	I0701 12:24:41.547569  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has current primary IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.547579  653531 main.go:141] libmachine: (ha-735960-m02) Reserving static IP address...
	I0701 12:24:41.547991  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "ha-735960-m02", mac: "52:54:00:0b:2f:ce", ip: "192.168.39.86"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.548015  653531 main.go:141] libmachine: (ha-735960-m02) Reserved static IP address: 192.168.39.86
	I0701 12:24:41.548032  653531 main.go:141] libmachine: (ha-735960-m02) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960-m02", mac: "52:54:00:0b:2f:ce", ip: "192.168.39.86"}
	I0701 12:24:41.548045  653531 main.go:141] libmachine: (ha-735960-m02) DBG | Getting to WaitForSSH function...
	I0701 12:24:41.548059  653531 main.go:141] libmachine: (ha-735960-m02) Waiting for SSH to be available...
	I0701 12:24:41.550171  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.550523  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.550552  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.550644  653531 main.go:141] libmachine: (ha-735960-m02) DBG | Using SSH client type: external
	I0701 12:24:41.550670  653531 main.go:141] libmachine: (ha-735960-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa (-rw-------)
	I0701 12:24:41.550719  653531 main.go:141] libmachine: (ha-735960-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:24:41.550739  653531 main.go:141] libmachine: (ha-735960-m02) DBG | About to run SSH command:
	I0701 12:24:41.550754  653531 main.go:141] libmachine: (ha-735960-m02) DBG | exit 0
	I0701 12:24:41.678305  653531 main.go:141] libmachine: (ha-735960-m02) DBG | SSH cmd err, output: <nil>: 
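
The "will retry after ..." intervals above (295 ms, 353 ms, 398 ms, up to a few seconds) are libmachine polling libvirt's DHCP leases until the restarted VM acquires an address, then probing SSH with `exit 0` until the guest answers. A minimal sketch of that randomized-backoff polling pattern in Go; `waitFor` and its constants are illustrative, not minikube's actual retry.go:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitFor polls check until it succeeds or timeout elapses, sleeping a
    // randomized, gradually growing interval between attempts, matching the
    // shape of the "will retry after ..." lines in the log.
    func waitFor(check func() error, timeout time.Duration) error {
        backoff := 250 * time.Millisecond
        deadline := time.Now().Add(timeout)
        for {
            if err := check(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for machine to come up")
            }
            // jitter keeps concurrent waiters from synchronizing
            sleep := backoff/2 + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            backoff = backoff * 3 / 2
        }
    }

    func main() {
        attempts := 0
        _ = waitFor(func() error {
            attempts++
            if attempts < 5 {
                return errors.New("unable to find current IP address")
            }
            return nil
        }, 2*time.Minute)
        fmt.Println("machine is up")
    }
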
	I0701 12:24:41.678691  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetConfigRaw
	I0701 12:24:41.679334  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:24:41.682006  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.682508  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.682540  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.682792  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:24:41.683005  653531 machine.go:94] provisionDockerMachine start ...
	I0701 12:24:41.683030  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:41.683290  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:41.685599  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.685951  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.685979  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.686153  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:41.686378  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.686551  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.686684  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:41.686822  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:41.687030  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:41.687043  653531 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:24:41.802622  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:24:41.802657  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:24:41.802940  653531 buildroot.go:166] provisioning hostname "ha-735960-m02"
	I0701 12:24:41.802963  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:24:41.803281  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:41.805937  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.806443  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.806470  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.806608  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:41.806785  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.807003  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.807154  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:41.807371  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:41.807554  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:41.807567  653531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960-m02 && echo "ha-735960-m02" | sudo tee /etc/hostname
	I0701 12:24:41.938306  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960-m02
	
	I0701 12:24:41.938353  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:41.941077  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.941535  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:41.941592  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:41.941765  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:41.941994  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.942161  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:41.942290  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:41.942491  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:41.942676  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:41.942701  653531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:24:42.062715  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
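
The shell fragment above is idempotent: the outer `grep -xq` guard skips the edit when /etc/hosts already pins the hostname, and the inner branch either rewrites an existing 127.0.1.1 entry in place or appends a new one, so the node name keeps resolving locally even without DNS. A small sketch of templating that fragment per node; the `hostsSnippet` helper is illustrative, not minikube's:

    package main

    import "fmt"

    // hostsSnippet renders the idempotent /etc/hosts edit for one node.
    func hostsSnippet(hostname string) string {
        return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, hostname)
    }

    func main() {
        fmt.Println(hostsSnippet("ha-735960-m02"))
    }
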
	I0701 12:24:42.062750  653531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:24:42.062772  653531 buildroot.go:174] setting up certificates
	I0701 12:24:42.062785  653531 provision.go:84] configureAuth start
	I0701 12:24:42.062795  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
	I0701 12:24:42.063134  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:24:42.065907  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.066246  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.066279  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.066490  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:42.068450  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.068818  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.068843  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.068957  653531 provision.go:143] copyHostCerts
	I0701 12:24:42.068988  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:24:42.069022  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:24:42.069030  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:24:42.069082  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:24:42.069156  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:24:42.069173  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:24:42.069180  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:24:42.069199  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:24:42.069241  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:24:42.069257  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:24:42.069263  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:24:42.069279  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:24:42.069326  653531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960-m02 san=[127.0.0.1 192.168.39.86 ha-735960-m02 localhost minikube]
	I0701 12:24:42.315961  653531 provision.go:177] copyRemoteCerts
	I0701 12:24:42.316035  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:24:42.316061  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:42.318992  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.319361  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.319395  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.319557  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:42.319740  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.319969  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:42.320092  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:24:42.408924  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:24:42.408999  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:24:42.434942  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:24:42.435038  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0701 12:24:42.458628  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:24:42.458728  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 12:24:42.482505  653531 provision.go:87] duration metric: took 419.705556ms to configureAuth
	I0701 12:24:42.482536  653531 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:24:42.482760  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:42.482797  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:42.483103  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:42.485829  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.486249  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.486277  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.486574  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:42.486846  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.487031  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.487211  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:42.487420  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:42.487596  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:42.487608  653531 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:24:42.603937  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:24:42.603962  653531 buildroot.go:70] root file system type: tmpfs
	I0701 12:24:42.604101  653531 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:24:42.604123  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:42.606937  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.607326  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.607351  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.607512  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:42.607762  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.607935  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.608131  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:42.608318  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:42.608490  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:42.608578  653531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.16"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:24:42.731927  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.16
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
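
The %!s(MISSING) token in the logged command is not a corrupted unit file: the remote command legitimately contains a shell-level `printf %s`, and when that command string is itself passed through a Printf-style logging call with no matching argument, Go's fmt package renders the stray verb as %!s(MISSING) (the same artifact appears later in `date +%s.%N` and `stat -c %s`). A minimal reproduction:

    package main

    import "fmt"

    func main() {
        // the remote command genuinely contains a shell-level "printf %s"
        cmd := `sudo mkdir -p /lib/systemd/system && printf %s "[Unit] ..." | sudo tee /lib/systemd/system/docker.service.new`

        // used as the format string, the unfilled %s verb is rendered as
        // %!s(MISSING), exactly as in the log above
        fmt.Printf(cmd + "\n")

        // passed as an argument instead, the command survives intact
        fmt.Printf("%s\n", cmd)
    }
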
	I0701 12:24:42.731963  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:42.735092  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.735552  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:42.735586  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:42.735721  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:42.735916  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.736097  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:42.736226  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:42.736425  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:42.736596  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:42.736613  653531 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:24:44.641546  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 12:24:44.641584  653531 machine.go:97] duration metric: took 2.958558644s to provisionDockerMachine
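
The unit install above is deliberately idempotent: the new unit is written to docker.service.new, diffed against the installed file, and only when they differ is it moved into place followed by daemon-reload, enable, and restart. On this freshly restarted VM the diff fails because no docker.service exists yet, so the swap and enable run unconditionally (hence the "Created symlink" line). A hedged sketch of the same compare-then-swap idea, local rather than over SSH:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // installIfChanged writes unit to path only when the installed content
    // differs; it returns true when the caller should daemon-reload and
    // restart the service.
    func installIfChanged(path string, unit []byte) (bool, error) {
        current, err := os.ReadFile(path)
        if err == nil && bytes.Equal(current, unit) {
            return false, nil // already up to date, skip the restart
        }
        tmp := path + ".new"
        if err := os.WriteFile(tmp, unit, 0o644); err != nil {
            return false, err
        }
        // rename is atomic on one filesystem, so readers never observe a
        // half-written unit file
        return true, os.Rename(tmp, path)
    }

    func main() {
        changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
        fmt.Println(changed, err)
    }
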
	I0701 12:24:44.641601  653531 start.go:293] postStartSetup for "ha-735960-m02" (driver="kvm2")
	I0701 12:24:44.641615  653531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:24:44.641637  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:44.642004  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:24:44.642040  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:44.645224  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.645706  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:44.645738  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.645868  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:44.646053  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:44.646222  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:44.646376  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:24:44.736407  653531 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:24:44.740656  653531 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:24:44.740682  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:24:44.740758  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:24:44.740835  653531 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:24:44.740848  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:24:44.740945  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:24:44.749928  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:24:44.772391  653531 start.go:296] duration metric: took 130.772957ms for postStartSetup
	I0701 12:24:44.772467  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:44.772787  653531 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:24:44.772824  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:44.775217  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.775582  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:44.775607  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.775804  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:44.776027  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:44.776203  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:44.776383  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:24:44.864587  653531 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:24:44.864665  653531 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:24:44.904439  653531 fix.go:56] duration metric: took 18.361036234s for fixHost
	I0701 12:24:44.904495  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:44.907382  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.907911  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:44.907944  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:44.908260  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:44.908504  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:44.908689  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:44.908847  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:44.909036  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:24:44.909257  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0701 12:24:44.909273  653531 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0701 12:24:45.022815  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836684.998547011
	
	I0701 12:24:45.022845  653531 fix.go:216] guest clock: 1719836684.998547011
	I0701 12:24:45.022855  653531 fix.go:229] Guest: 2024-07-01 12:24:44.998547011 +0000 UTC Remote: 2024-07-01 12:24:44.904469964 +0000 UTC m=+42.374321626 (delta=94.077047ms)
	I0701 12:24:45.022878  653531 fix.go:200] guest clock delta is within tolerance: 94.077047ms
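
The clock check above compares `date +%s.%N` from the guest (1719836684.998547011) against the host-side reference time and accepts the machine because the absolute delta, 94.077047ms, falls inside tolerance; large skew can break certificate validation and etcd once the node joins. A minimal sketch of the comparison, where the one-second tolerance is an assumption, not minikube's actual constant:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // values taken from the fix.go lines above
        guest := time.Unix(1719836684, 998547011) // `date +%s.%N` on the VM
        remote := time.Date(2024, 7, 1, 12, 24, 44, 904469964, time.UTC)

        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // assumed threshold
        fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
    }
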
	I0701 12:24:45.022885  653531 start.go:83] releasing machines lock for "ha-735960-m02", held for 18.479517819s
	I0701 12:24:45.022904  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:45.023158  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:24:45.025946  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.026429  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:45.026468  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.028669  653531 out.go:177] * Found network options:
	I0701 12:24:45.030344  653531 out.go:177]   - NO_PROXY=192.168.39.16
	W0701 12:24:45.031921  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:24:45.031959  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:45.032658  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:45.032888  653531 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
	I0701 12:24:45.033013  653531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:24:45.033058  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	W0701 12:24:45.033081  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:24:45.033171  653531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0701 12:24:45.033195  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
	I0701 12:24:45.035752  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.035981  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.036219  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:45.036245  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.036348  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:45.036378  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:45.036406  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:45.036593  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
	I0701 12:24:45.036652  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:45.036754  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
	I0701 12:24:45.036826  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:45.036903  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
	I0701 12:24:45.036969  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	I0701 12:24:45.037025  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
	W0701 12:24:45.137872  653531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:24:45.137946  653531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:24:45.154683  653531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:24:45.154717  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:24:45.154827  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:24:45.176886  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:24:45.188345  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:24:45.197947  653531 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:24:45.198012  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:24:45.207676  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:24:45.217559  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:24:45.227803  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:24:45.238295  653531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:24:45.248764  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:24:45.258909  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:24:45.268726  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:24:45.279039  653531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:24:45.288042  653531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:24:45.296914  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:45.411404  653531 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 12:24:45.436012  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:24:45.436122  653531 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:24:45.450142  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:24:45.462829  653531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:24:45.481152  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:24:45.494283  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:24:45.507074  653531 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:24:45.534155  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:24:45.547185  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:24:45.564773  653531 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:24:45.568760  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:24:45.577542  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:24:45.593021  653531 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:24:45.701211  653531 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:24:45.815750  653531 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:24:45.815810  653531 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 12:24:45.831989  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:45.941168  653531 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:24:48.340550  653531 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.399331326s)
	I0701 12:24:48.340643  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 12:24:48.354582  653531 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0701 12:24:48.370449  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:24:48.383634  653531 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:24:48.491334  653531 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:24:48.612412  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:48.742773  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:24:48.759856  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:24:48.772621  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:48.884376  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
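
The runtime selection above follows one pattern twice: crictl is first pointed at containerd's socket while containerd and crio are reconfigured and stopped, then /etc/crictl.yaml is rewritten toward cri-dockerd and the cri-docker socket and service are unmasked, enabled, and restarted, since Docker is the chosen runtime and the kubelet reaches it through the cri-dockerd shim. A small sketch of generating that one-line config; the endpoint path is the one in the log:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Docker is driven through cri-dockerd, so crictl talks to its
        // socket rather than to containerd's
        const endpoint = "unix:///var/run/cri-dockerd.sock"
        cfg := fmt.Sprintf("runtime-endpoint: %s\n", endpoint)
        if err := os.WriteFile("/etc/crictl.yaml", []byte(cfg), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
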
	I0701 12:24:48.964457  653531 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:24:48.964538  653531 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0701 12:24:48.970016  653531 start.go:562] Will wait 60s for crictl version
	I0701 12:24:48.970082  653531 ssh_runner.go:195] Run: which crictl
	I0701 12:24:48.974017  653531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:24:49.010380  653531 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.1
	RuntimeApiVersion:  v1
	I0701 12:24:49.010470  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:24:49.038204  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:24:49.060452  653531 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
	I0701 12:24:49.061662  653531 out.go:177]   - env NO_PROXY=192.168.39.16
	I0701 12:24:49.062894  653531 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
	I0701 12:24:49.065420  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:49.065726  653531 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:36 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
	I0701 12:24:49.065756  653531 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
	I0701 12:24:49.065973  653531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0701 12:24:49.070110  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:24:49.082188  653531 mustload.go:65] Loading cluster: ha-735960
	I0701 12:24:49.082530  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:49.082941  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:49.082993  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:49.097892  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43153
	I0701 12:24:49.098396  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:49.098894  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:49.098917  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:49.099215  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:49.099436  653531 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:24:49.100798  653531 host.go:66] Checking if "ha-735960" exists ...
	I0701 12:24:49.101079  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:24:49.101112  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:24:49.115736  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34567
	I0701 12:24:49.116185  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:24:49.116654  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:24:49.116678  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:24:49.117007  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:24:49.117203  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:24:49.117366  653531 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960 for IP: 192.168.39.86
	I0701 12:24:49.117380  653531 certs.go:194] generating shared ca certs ...
	I0701 12:24:49.117398  653531 certs.go:226] acquiring lock for ca certs: {Name:mk34e166bfd069e523b2325e14d1812c523bff53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:24:49.117551  653531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key
	I0701 12:24:49.117591  653531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key
	I0701 12:24:49.117600  653531 certs.go:256] generating profile certs ...
	I0701 12:24:49.117669  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key
	I0701 12:24:49.117728  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.b19d6c48
	I0701 12:24:49.117760  653531 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key
	I0701 12:24:49.117771  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 12:24:49.117786  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 12:24:49.117800  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 12:24:49.117811  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 12:24:49.117823  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0701 12:24:49.117835  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0701 12:24:49.117847  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0701 12:24:49.117858  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0701 12:24:49.117903  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem (1338 bytes)
	W0701 12:24:49.117934  653531 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854_empty.pem, impossibly tiny 0 bytes
	I0701 12:24:49.117946  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 12:24:49.117973  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:24:49.117994  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:24:49.118013  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem (1675 bytes)
	I0701 12:24:49.118048  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:24:49.118076  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem -> /usr/share/ca-certificates/637854.pem
	I0701 12:24:49.118092  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /usr/share/ca-certificates/6378542.pem
	I0701 12:24:49.118104  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:49.118150  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:24:49.120907  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:49.121392  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:24:49.121418  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:24:49.121523  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:24:49.121694  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:24:49.121825  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:24:49.121959  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:24:49.190715  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0701 12:24:49.195755  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0701 12:24:49.206197  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0701 12:24:49.209869  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0701 12:24:49.219170  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0701 12:24:49.223114  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0701 12:24:49.233000  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0701 12:24:49.237162  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0701 12:24:49.246812  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0701 12:24:49.250554  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0701 12:24:49.259926  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0701 12:24:49.263843  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0701 12:24:49.274536  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:24:49.299467  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:24:49.322887  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:24:49.345311  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:24:49.367988  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0701 12:24:49.390632  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 12:24:49.416047  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 12:24:49.439560  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 12:24:49.462382  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem --> /usr/share/ca-certificates/637854.pem (1338 bytes)
	I0701 12:24:49.484590  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /usr/share/ca-certificates/6378542.pem (1708 bytes)
	I0701 12:24:49.507507  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:24:49.529932  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0701 12:24:49.545966  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0701 12:24:49.561557  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0701 12:24:49.577402  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0701 12:24:49.593250  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0701 12:24:49.609739  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0701 12:24:49.626015  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
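Note: the stat / "scp <path> --> memory" pairs above read the shared control-plane secrets (sa.pub, sa.key, the front-proxy CA, the etcd CA) off the node into memory, and the later "scp memory --> ..." lines write them back out, so every control-plane node ends up with the same key material. A hypothetical one-function sketch of the read side, reusing an *ssh.Client like the one opened earlier:

package sketch

import "golang.org/x/crypto/ssh"

// fetchToMemory mirrors the "scp <path> --> memory (N bytes)" lines:
// run a remote read and keep the bytes in memory for later re-upload.
func fetchToMemory(client *ssh.Client, remotePath string) ([]byte, error) {
	sess, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer sess.Close()
	return sess.Output("sudo cat " + remotePath)
}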
	I0701 12:24:49.643897  653531 ssh_runner.go:195] Run: openssl version
	I0701 12:24:49.649608  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/637854.pem && ln -fs /usr/share/ca-certificates/637854.pem /etc/ssl/certs/637854.pem"
	I0701 12:24:49.660203  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/637854.pem
	I0701 12:24:49.664449  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 12:11 /usr/share/ca-certificates/637854.pem
	I0701 12:24:49.664503  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/637854.pem
	I0701 12:24:49.670228  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/637854.pem /etc/ssl/certs/51391683.0"
	I0701 12:24:49.680554  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6378542.pem && ln -fs /usr/share/ca-certificates/6378542.pem /etc/ssl/certs/6378542.pem"
	I0701 12:24:49.690901  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6378542.pem
	I0701 12:24:49.695200  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 12:11 /usr/share/ca-certificates/6378542.pem
	I0701 12:24:49.695266  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem
	I0701 12:24:49.700503  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6378542.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 12:24:49.710442  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:24:49.720297  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:49.724530  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:49.724590  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:24:49.729832  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
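Note: each `openssl x509 -hash -noout` above prints the certificate's subject hash, and the `ln -fs` that follows names an /etc/ssl/certs/<hash>.0 symlink after it; that is the layout OpenSSL's hashed-directory lookup expects (b5213941 is the hash computed for minikubeCA.pem here). A hypothetical Go helper doing the same pair of steps:

package sketch

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the subject hash of a PEM certificate via
// openssl and points /etc/ssl/certs/<hash>.0 at it, matching the
// hash+symlink sequence in the log. Needs root for /etc/ssl/certs.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // "-fs" semantics: replace any existing link
	return os.Symlink(certPath, link)
}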
	I0701 12:24:49.739574  653531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 12:24:49.743717  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 12:24:49.749498  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 12:24:49.755217  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 12:24:49.761210  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 12:24:49.767138  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 12:24:49.772853  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
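Note: `openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 86400 seconds (24 h); minikube runs it against each serving and client cert and would regenerate any that fail, which none appear to here since the run proceeds straight to the kubeadm step. The same check in native Go, as a sketch (helper name hypothetical):

package sketch

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires inside d -- the equivalent of `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}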
	I0701 12:24:49.778598  653531 kubeadm.go:928] updating node {m02 192.168.39.86 8443 v1.30.2 docker true true} ...
	I0701 12:24:49.778706  653531 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0701 12:24:49.778735  653531 kube-vip.go:115] generating kube-vip config ...
	I0701 12:24:49.778769  653531 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0701 12:24:49.792722  653531 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
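Note: the modprobe above is the gate for the "auto-enabling" decision: kube-vip's control-plane load balancing rides on IPVS, so the lb_enable entries only appear in the config dump below because the ip_vs modules loaded. A sketch of that probe (hypothetical helper):

package sketch

import "os/exec"

// hasIPVS loads the IPVS kernel modules kube-vip's load balancer needs;
// if this failed, the generated config would omit lb_enable.
func hasIPVS() bool {
	return exec.Command("sudo", "modprobe", "--all",
		"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack").Run() == nil
}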
	I0701 12:24:49.792794  653531 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
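Note: the manifest above is a static pod (written to /etc/kubernetes/manifests a few lines below): kube-vip holds the 192.168.39.254 VIP on eth0 via ARP, using leader election on the plndr-cp-lock lease (5 s duration, 3 s renew deadline, 1 s retry), and with cp_enable/lb_enable it load-balances apiserver traffic on 8443 across the control planes. A quick way to sanity-check such a generated manifest, assuming sigs.k8s.io/yaml and k8s.io/api are available:

package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

// Parse a saved copy of the manifest into a typed Pod to confirm it is
// well-formed before it lands in the static-pod directory.
func main() {
	data, err := os.ReadFile("kube-vip.yaml") // hypothetical local copy
	if err != nil {
		panic(err)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		panic(err)
	}
	fmt.Println(pod.Name, pod.Spec.Containers[0].Image)
	// kube-vip ghcr.io/kube-vip/kube-vip:v0.8.0
}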
	I0701 12:24:49.792861  653531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 12:24:49.804161  653531 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:24:49.804241  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0701 12:24:49.814550  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0701 12:24:49.831390  653531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:24:49.848397  653531 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0701 12:24:49.865443  653531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0701 12:24:49.869104  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
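Note: the one-liner above upserts the control-plane VIP into /etc/hosts: the brace group emits the file minus any existing control-plane.minikube.internal line plus the fresh "192.168.39.254<tab>control-plane.minikube.internal" entry, writes that to a temp file, then copies it back with sudo (a plain `sudo ... > /etc/hosts` would redirect in the unprivileged shell and fail). The same upsert as a Go sketch (run as root):

package sketch

import (
	"os"
	"strings"
)

// upsertHost drops any line already mapping name and appends a fresh
// "ip\tname" entry, mirroring the grep/echo/cp pipeline in the log.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}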
	I0701 12:24:49.880669  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:49.995061  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:24:50.012084  653531 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:24:50.012461  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:24:50.014165  653531 out.go:177] * Verifying Kubernetes components...
	I0701 12:24:50.015753  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:24:50.164868  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:24:50.189841  653531 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:24:50.190056  653531 kapi.go:59] client config for ha-735960: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key", CAFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfbb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0701 12:24:50.190130  653531 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.16:8443
	I0701 12:24:50.190323  653531 node_ready.go:35] waiting up to 6m0s for node "ha-735960-m02" to be "Ready" ...
	I0701 12:24:50.190456  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:50.190466  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:50.190477  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:50.190487  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:54.343288  653531 round_trippers.go:574] Response Status:  in 4152 milliseconds
	I0701 12:24:55.343662  653531 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:55.343730  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:55.343744  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:55.343754  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:55.343758  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:55.344302  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:55.344422  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.1:52872->192.168.39.16:8443: read: connection reset by peer
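Note: from here the log is a plain readiness poll: the stale VIP host was already swapped for the node IP at kubeadm.go:477, the first GET hit an apiserver that was still coming up (empty status after 4152 ms, then connection refused / connection reset), and node_ready retries roughly every 500 ms for up to 6 m until the 200 OK at 12:25:11. A minimal client-go sketch of that loop, assuming k8s.io/client-go and apimachinery's wait package:

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls GET /api/v1/nodes/<name> until the Ready
// condition is True, treating transient errors (connection refused
// while the apiserver restarts) as "not ready yet", like node_ready.go.
func waitNodeReady(cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(context.Background(),
		500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}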
	I0701 12:24:55.344514  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:55.344528  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:55.344538  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:55.344544  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:55.344874  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:55.691490  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:55.691516  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:55.691527  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:55.691533  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:55.691976  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:56.190655  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:56.190680  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:56.190689  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:56.190694  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:56.191223  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:56.690634  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:56.690660  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:56.690669  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:56.690672  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:56.691171  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:57.190543  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:57.190576  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:57.190588  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:57.190593  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:57.191164  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:57.691155  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:57.691185  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:57.691197  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:57.691205  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:57.691722  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:57.691807  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused
	I0701 12:24:58.190799  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:58.190827  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:58.190841  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:58.190847  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:58.191262  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:58.690909  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:58.690934  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:58.690943  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:58.690947  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:58.691435  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:59.191343  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:59.191369  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:59.191379  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:59.191385  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:59.191790  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:59.691540  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:24:59.691570  653531 round_trippers.go:469] Request Headers:
	I0701 12:24:59.691582  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:24:59.691587  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:24:59.692063  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:24:59.692155  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused
	I0701 12:25:00.190742  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:00.190767  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:00.190776  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:00.190780  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:00.191351  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:00.691648  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:00.691679  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:00.691691  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:00.691697  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:00.692126  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:01.190745  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:01.190769  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:01.190778  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:01.190784  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:01.191282  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:01.691565  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:01.691597  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:01.691614  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:01.691621  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:01.692000  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:02.191662  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:02.191693  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:02.191706  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:02.191714  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:02.192140  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:02.192224  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused
	I0701 12:25:02.691148  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:02.691173  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:02.691180  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:02.691185  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:02.691566  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:03.190561  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:03.190591  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:03.190603  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:03.190611  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:03.191147  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:03.690811  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:03.690839  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:03.690849  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:03.690854  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:03.691458  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:04.191099  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:04.191130  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:04.191142  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:04.191147  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:04.191609  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:04.691342  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:04.691368  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:04.691376  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:04.691380  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:04.691811  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:04.691897  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused
	I0701 12:25:05.191508  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:05.191532  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:05.191540  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:05.191550  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:05.192027  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:05.690552  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:05.690579  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:05.690588  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:05.690592  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:05.691114  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:06.190741  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:06.190773  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:06.190785  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:06.190790  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:06.191210  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:06.690600  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:06.690630  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:06.690640  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:06.690646  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:06.691129  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:07.191607  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:07.191631  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:07.191639  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:07.191643  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:07.192193  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:07.192283  653531 node_ready.go:53] error getting node "ha-735960-m02": Get "https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02": dial tcp 192.168.39.16:8443: connect: connection refused
	I0701 12:25:07.691099  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:07.691129  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:07.691140  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:07.691145  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:07.691572  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:08.191598  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:08.191623  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:08.191632  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:08.191636  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:08.192026  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:08.690679  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:08.690702  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:08.690713  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:08.690717  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:08.691142  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:09.190900  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:09.190924  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:09.190932  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:09.190938  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:09.191395  653531 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0701 12:25:09.690594  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:09.690615  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:09.690623  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:09.690629  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:11.690040  653531 round_trippers.go:574] Response Status: 200 OK in 1999 milliseconds
	I0701 12:25:11.702263  653531 node_ready.go:49] node "ha-735960-m02" has status "Ready":"True"
	I0701 12:25:11.702299  653531 node_ready.go:38] duration metric: took 21.511933368s for node "ha-735960-m02" to be "Ready" ...
	I0701 12:25:11.702313  653531 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
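Note: the pod_ready polling that follows applies the same pattern per system-critical pod: list the kube-system pods once, then re-GET each pod (and its node) until the pod's Ready condition turns True. The condition check itself, sketched against the k8s.io/api types:

package sketch

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether a pod's PodReady condition is True --
// the loop below logs "Ready":"False" until this flips for coredns.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}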
	I0701 12:25:11.702416  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:25:11.702430  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:11.702441  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:11.702454  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:11.789461  653531 round_trippers.go:574] Response Status: 200 OK in 86 milliseconds
	I0701 12:25:11.802344  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:11.802466  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:11.802476  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:11.802483  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:11.802487  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:11.816015  653531 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0701 12:25:11.816768  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:11.816789  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:11.816801  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:11.816808  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:11.831063  653531 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0701 12:25:12.302968  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:12.302992  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:12.303000  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:12.303004  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:12.307067  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:12.308122  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:12.308138  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:12.308146  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:12.308150  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:12.311874  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:12.803638  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:12.803667  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:12.803679  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:12.803686  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:12.814049  653531 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0701 12:25:12.814887  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:12.814910  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:12.814921  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:12.814925  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:12.821738  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:25:13.303576  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:13.303600  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:13.303608  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:13.303614  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:13.307218  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:13.308090  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:13.308106  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:13.308113  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:13.308117  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:13.311302  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:13.803234  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:13.803266  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:13.803274  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:13.803277  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:13.806287  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:13.807004  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:13.807020  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:13.807029  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:13.807032  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:13.809746  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:13.810211  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:14.302637  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:14.302668  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:14.302676  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:14.302680  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:14.306137  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:14.306904  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:14.306920  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:14.306928  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:14.306932  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:14.309754  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:14.802564  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:14.802587  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:14.802595  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:14.802599  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:14.808775  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:25:14.809568  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:14.809588  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:14.809596  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:14.809601  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:14.812414  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:15.303353  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:15.303378  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:15.303386  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:15.303391  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:15.306881  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:15.307679  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:15.307702  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:15.307712  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:15.307721  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:15.310551  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:15.802545  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:15.802569  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:15.802577  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:15.802582  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:15.806303  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:15.807445  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:15.807462  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:15.807473  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:15.807479  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:15.813688  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:25:15.814187  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:16.303627  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:16.303655  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:16.303664  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:16.303667  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:16.307153  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:16.307819  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:16.307838  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:16.307848  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:16.307854  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:16.317298  653531 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0701 12:25:16.802946  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:16.802971  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:16.802979  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:16.802985  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:16.806421  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:16.807100  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:16.807120  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:16.807130  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:16.807135  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:16.809697  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:17.302581  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:17.302628  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:17.302640  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:17.302648  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:17.307226  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:17.307905  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:17.307922  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:17.307929  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:17.307936  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:17.311203  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:17.803470  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:17.803514  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:17.803526  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:17.803531  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:17.812734  653531 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0701 12:25:17.813577  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:17.813595  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:17.813601  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:17.813608  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:17.818648  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:25:17.819270  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:18.302575  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:18.302597  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:18.302605  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:18.302610  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:18.306847  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:18.307906  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:18.307927  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:18.307937  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:18.307943  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:18.310841  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:18.802657  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:18.802681  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:18.802689  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:18.802692  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:18.805685  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:18.806415  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:18.806434  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:18.806444  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:18.806451  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:18.809781  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:19.303618  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:19.303642  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:19.303650  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:19.303655  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:19.307473  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:19.308257  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:19.308275  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:19.308282  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:19.308286  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:19.311108  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:19.802669  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:19.802691  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:19.802700  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:19.802703  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:19.805915  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:19.806623  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:19.806641  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:19.806648  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:19.806653  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:19.809291  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:20.303135  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:20.303161  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:20.303169  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:20.303173  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:20.306861  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:20.307600  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:20.307618  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:20.307626  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:20.307630  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:20.310953  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:20.311503  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:20.803608  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:20.803633  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:20.803642  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:20.803645  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:20.807878  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:20.808941  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:20.808961  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:20.808969  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:20.808973  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:20.811817  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:21.303623  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:21.303648  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:21.303658  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:21.303662  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:21.307962  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:21.308821  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:21.308839  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:21.308846  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:21.308850  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:21.311792  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:21.803197  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:21.803227  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:21.803239  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:21.803244  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:21.806108  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:21.807085  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:21.807105  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:21.807138  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:21.807147  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:21.809757  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:22.302567  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:22.302593  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:22.302601  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:22.302608  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:22.306177  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:22.307066  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:22.307082  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:22.307091  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:22.307097  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:22.309849  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:22.803488  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:22.803511  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:22.803519  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:22.803523  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:22.807098  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:22.807809  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:22.807828  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:22.807839  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:22.807846  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:22.810906  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:22.811518  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	... [5 poll cycles (12:25:23.303-12:25:25.309) elided: identical paired pod/node GETs, all 200 OK in 2-10 ms] ...
	I0701 12:25:25.310316  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	... [5 poll cycles (12:25:25.803-12:25:27.810) elided: same paired pod/node GETs, all 200 OK] ...
	I0701 12:25:27.810761  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	... [4 poll cycles (12:25:28.303-12:25:29.810) elided: same paired pod/node GETs, all 200 OK] ...
	I0701 12:25:29.811143  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	... [5 poll cycles (12:25:30.303-12:25:32.314) elided: same paired pod/node GETs, all 200 OK] ...
	I0701 12:25:32.315492  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	... [5 poll cycles (12:25:32.802-12:25:34.810) elided: same paired pod/node GETs, all 200 OK] ...
	I0701 12:25:34.810998  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	... [4 poll cycles (12:25:35.303-12:25:36.811) elided: same paired pod/node GETs, all 200 OK] ...
	I0701 12:25:36.811752  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	... [5 poll cycles (12:25:37.303-12:25:39.311) elided: same paired pod/node GETs, all 200 OK] ...
	I0701 12:25:39.311854  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	... [5 poll cycles (12:25:39.803-12:25:41.810) elided: same paired pod/node GETs, all 200 OK] ...
	I0701 12:25:41.811143  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	... [5 poll cycles (12:25:42.303-12:25:44.311) elided: same paired pod/node GETs, all 200 OK] ...
	I0701 12:25:44.311762  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	... [5 poll cycles (12:25:44.803-12:25:46.810) elided: same paired pod/node GETs, all 200 OK] ...
	I0701 12:25:46.811392  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	... [4 poll cycles (12:25:47.302-12:25:48.811) elided: same paired pod/node GETs, all 200 OK] ...
	I0701 12:25:48.811475  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	... [5 poll cycles (12:25:49.303-12:25:51.312) elided: same paired pod/node GETs, all 200 OK] ...
	I0701 12:25:51.313579  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:51.803287  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:51.803312  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:51.803323  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:51.803329  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:51.807231  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:51.807995  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:51.808012  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:51.808020  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:51.808024  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:51.810740  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:52.303605  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:52.303629  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:52.303638  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:52.303643  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:52.306821  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:52.307565  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:52.307584  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:52.307594  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:52.307602  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:52.311075  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:52.803586  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:52.803610  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:52.803619  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:52.803623  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:52.807457  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:52.808236  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:52.808255  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:52.808266  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:52.808272  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:52.811703  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:53.303621  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:53.303644  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:53.303652  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:53.303656  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:53.310115  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:25:53.310845  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:53.310863  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:53.310874  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:53.310878  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:53.313553  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:53.314016  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:53.803325  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:53.803349  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:53.803357  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:53.803361  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:53.806896  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:53.807585  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:53.807601  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:53.807608  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:53.807613  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:53.810245  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:54.302928  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:54.302952  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:54.302960  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:54.302963  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:54.306523  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:54.307165  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:54.307184  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:54.307195  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:54.307203  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:54.310455  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:54.803344  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:54.803367  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:54.803377  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:54.803380  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:54.806607  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:54.807210  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:54.807225  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:54.807233  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:54.807236  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:54.809746  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:55.303597  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:55.303623  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:55.303633  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:55.303637  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:55.307054  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:55.307759  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:55.307774  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:55.307781  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:55.307788  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:55.313043  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:25:55.802698  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:55.802725  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:55.802736  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:55.802745  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:55.805918  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:55.806665  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:55.806682  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:55.806690  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:55.806694  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:55.809347  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:55.809833  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:56.303433  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:56.303460  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:56.303471  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:56.303479  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:56.307327  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:56.308094  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:56.308118  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:56.308126  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:56.308130  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:56.311241  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:56.803577  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:56.803605  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:56.803612  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:56.803616  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:56.806932  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:56.807699  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:56.807716  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:56.807724  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:56.807727  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:56.812547  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:57.303545  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:57.303573  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:57.303582  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:57.303586  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:57.307516  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:57.308162  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:57.308179  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:57.308186  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:57.308193  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:57.310961  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:57.803457  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:57.803482  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:57.803493  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:57.803500  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:57.807806  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:57.808679  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:57.808694  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:57.808704  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:57.808711  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:57.811544  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:57.811984  653531 pod_ready.go:102] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"False"
	I0701 12:25:58.303446  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:58.303471  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:58.303480  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:58.303484  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:58.307082  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:58.307737  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:58.307754  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:58.307762  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:58.307770  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:58.310778  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:58.803647  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:58.803671  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:58.803680  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:58.803690  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:58.807621  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:58.808241  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:58.808258  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:58.808266  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:58.808271  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:58.811002  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.302934  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:59.302961  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.302971  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.302976  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.306476  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:59.307188  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:59.307205  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.307213  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.307216  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.312012  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:25:59.803004  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:25:59.803028  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.803037  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.803041  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.806220  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:59.807058  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:59.807077  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.807083  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.807087  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.810042  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.810618  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"True"
	I0701 12:25:59.810639  653531 pod_ready.go:81] duration metric: took 48.008262746s for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.810648  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.810702  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p4rtz
	I0701 12:25:59.810709  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.810716  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.810720  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.813396  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.813957  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:59.813972  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.813979  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.813982  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.816606  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.816994  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace has status "Ready":"True"
	I0701 12:25:59.817012  653531 pod_ready.go:81] duration metric: took 6.357752ms for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.817021  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.817069  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960
	I0701 12:25:59.817076  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.817084  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.817090  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.819509  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.819970  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:25:59.819984  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.819991  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.819995  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.822382  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.822919  653531 pod_ready.go:92] pod "etcd-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:25:59.822941  653531 pod_ready.go:81] duration metric: took 5.912537ms for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.822951  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.823013  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m02
	I0701 12:25:59.823021  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.823028  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.823032  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.825241  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.825771  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:25:59.825785  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.825791  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.825795  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.828111  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.828706  653531 pod_ready.go:92] pod "etcd-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:25:59.828725  653531 pod_ready.go:81] duration metric: took 5.760203ms for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.828740  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:25:59.828804  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:25:59.828813  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.828820  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.828827  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.832068  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:25:59.832863  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:25:59.832878  653531 round_trippers.go:469] Request Headers:
	I0701 12:25:59.832885  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:25:59.832892  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:25:59.835452  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:25:59.835992  653531 pod_ready.go:97] node "ha-735960-m03" hosting pod "etcd-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:25:59.836024  653531 pod_ready.go:81] duration metric: took 7.273472ms for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	E0701 12:25:59.836031  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m03" hosting pod "etcd-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:25:59.836046  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:00.003492  653531 request.go:629] Waited for 167.376104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960
	I0701 12:26:00.003566  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960
	I0701 12:26:00.003574  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:00.003585  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:00.003603  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:00.011681  653531 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0701 12:26:00.203578  653531 request.go:629] Waited for 191.210292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:00.203641  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:00.203647  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:00.203654  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:00.203664  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:00.207391  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:00.207910  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:00.207934  653531 pod_ready.go:81] duration metric: took 371.877302ms for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:00.207946  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:00.403020  653531 request.go:629] Waited for 194.98389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m02
	I0701 12:26:00.403111  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m02
	I0701 12:26:00.403119  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:00.403141  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:00.403168  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:00.406515  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:00.603670  653531 request.go:629] Waited for 196.408497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:00.603756  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:00.603766  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:00.603776  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:00.603787  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:00.607641  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:00.608254  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:00.608279  653531 pod_ready.go:81] duration metric: took 400.3268ms for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:00.608290  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:00.803335  653531 request.go:629] Waited for 194.970976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:00.803416  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:00.803423  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:00.803432  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:00.803437  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:00.806887  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.003849  653531 request.go:629] Waited for 196.371058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:01.003924  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:01.003931  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:01.003942  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:01.003947  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:01.007167  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.007625  653531 pod_ready.go:97] node "ha-735960-m03" hosting pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:01.007649  653531 pod_ready.go:81] duration metric: took 399.353356ms for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	E0701 12:26:01.007659  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m03" hosting pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:01.007667  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:01.203752  653531 request.go:629] Waited for 195.992128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960
	I0701 12:26:01.203816  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960
	I0701 12:26:01.203821  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:01.203829  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:01.203835  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:01.207391  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.403364  653531 request.go:629] Waited for 195.371527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:01.403446  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:01.403452  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:01.403460  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:01.403464  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:01.406768  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.407262  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:01.407282  653531 pod_ready.go:81] duration metric: took 399.606397ms for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:01.407291  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:01.603806  653531 request.go:629] Waited for 196.426419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m02
	I0701 12:26:01.603868  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m02
	I0701 12:26:01.603877  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:01.603885  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:01.603889  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:01.607133  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.803115  653531 request.go:629] Waited for 195.29931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:01.803195  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:01.803202  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:01.803213  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:01.803220  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:01.806296  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:01.806997  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:01.807020  653531 pod_ready.go:81] duration metric: took 399.723075ms for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:01.807032  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:02.003077  653531 request.go:629] Waited for 195.935538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m03
	I0701 12:26:02.003184  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m03
	I0701 12:26:02.003199  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:02.003212  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:02.003220  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:02.008458  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:26:02.203469  653531 request.go:629] Waited for 194.368942ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:02.203529  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:02.203535  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:02.203542  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:02.203546  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:02.207148  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:02.207764  653531 pod_ready.go:97] node "ha-735960-m03" hosting pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:02.207791  653531 pod_ready.go:81] duration metric: took 400.749537ms for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	E0701 12:26:02.207804  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m03" hosting pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:02.207816  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:02.403791  653531 request.go:629] Waited for 195.887211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25ssf
	I0701 12:26:02.403858  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25ssf
	I0701 12:26:02.403864  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:02.403874  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:02.403879  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:02.407843  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:02.603935  653531 request.go:629] Waited for 195.282891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:26:02.604003  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:26:02.604008  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:02.604017  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:02.604024  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:02.607222  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:02.607681  653531 pod_ready.go:97] node "ha-735960-m04" hosting pod "kube-proxy-25ssf" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m04" has status "Ready":"Unknown"
	I0701 12:26:02.607701  653531 pod_ready.go:81] duration metric: took 399.872451ms for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	E0701 12:26:02.607710  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m04" hosting pod "kube-proxy-25ssf" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m04" has status "Ready":"Unknown"
	I0701 12:26:02.607715  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:02.803135  653531 request.go:629] Waited for 195.335441ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:26:02.803208  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:26:02.803214  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:02.803221  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:02.803229  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:02.806089  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:03.004065  653531 request.go:629] Waited for 197.373789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:03.004141  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:03.004150  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:03.004158  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:03.004174  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:03.007294  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:03.007921  653531 pod_ready.go:97] node "ha-735960-m03" hosting pod "kube-proxy-776rt" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:03.007945  653531 pod_ready.go:81] duration metric: took 400.223567ms for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
	E0701 12:26:03.007955  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m03" hosting pod "kube-proxy-776rt" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:03.007961  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:03.204042  653531 request.go:629] Waited for 195.997795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:26:03.204129  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:26:03.204135  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:03.204143  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:03.204151  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:03.207989  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:03.404038  653531 request.go:629] Waited for 195.374708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:03.404108  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:03.404113  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:03.404122  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:03.404127  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:03.407364  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:03.407859  653531 pod_ready.go:92] pod "kube-proxy-b6knb" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:03.407879  653531 pod_ready.go:81] duration metric: took 399.911763ms for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:03.407889  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:03.603040  653531 request.go:629] Waited for 195.068023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:26:03.603123  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:26:03.603128  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:03.603137  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:03.603141  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:03.606547  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:03.803798  653531 request.go:629] Waited for 196.387613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:03.803870  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:03.803875  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:03.803883  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:03.803888  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:03.807381  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:03.807877  653531 pod_ready.go:92] pod "kube-proxy-lphzn" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:03.807898  653531 pod_ready.go:81] duration metric: took 400.000751ms for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:03.807907  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:04.004031  653531 request.go:629] Waited for 196.031388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:26:04.004089  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:26:04.004095  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:04.004107  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:04.004115  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:04.007598  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:04.204058  653531 request.go:629] Waited for 195.850938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:04.204148  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:04.204158  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:04.204172  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:04.204181  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:04.207457  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:04.208086  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:04.208102  653531 pod_ready.go:81] duration metric: took 400.189366ms for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:04.208112  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:04.403245  653531 request.go:629] Waited for 195.048743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:26:04.403318  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:26:04.403323  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:04.403331  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:04.403335  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:04.406662  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:04.603781  653531 request.go:629] Waited for 196.396031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:04.603851  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:04.603858  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:04.603868  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:04.603872  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:04.607382  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:04.607837  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:04.607857  653531 pod_ready.go:81] duration metric: took 399.737176ms for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:04.607869  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:04.803931  653531 request.go:629] Waited for 195.967281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:26:04.804004  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:26:04.804010  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:04.804018  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:04.804025  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:04.807572  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:05.003764  653531 request.go:629] Waited for 195.365798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:05.003830  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:05.003836  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:05.003844  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:05.003852  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:05.006888  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:05.007360  653531 pod_ready.go:97] node "ha-735960-m03" hosting pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:05.007379  653531 pod_ready.go:81] duration metric: took 399.502183ms for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	E0701 12:26:05.007388  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m03" hosting pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m03" has status "Ready":"Unknown"
	I0701 12:26:05.007396  653531 pod_ready.go:38] duration metric: took 53.305072048s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 12:26:05.007419  653531 api_server.go:52] waiting for apiserver process to appear ...
	I0701 12:26:05.007525  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 12:26:05.023687  653531 logs.go:276] 2 containers: [f615f587cb12 c36c1d459356]
	I0701 12:26:05.023779  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 12:26:05.041137  653531 logs.go:276] 2 containers: [68c63c4abd01 dff0f4abea41]
	I0701 12:26:05.041235  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 12:26:05.059910  653531 logs.go:276] 0 containers: []
	W0701 12:26:05.059939  653531 logs.go:278] No container was found matching "coredns"
	I0701 12:26:05.060005  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 12:26:05.076858  653531 logs.go:276] 2 containers: [279483668a9c 58811626a0de]
	I0701 12:26:05.076953  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 12:26:05.091973  653531 logs.go:276] 2 containers: [156169e4ac3c 2885f7cf6f93]
	I0701 12:26:05.092072  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 12:26:05.109350  653531 logs.go:276] 2 containers: [a72e102b5bf7 a1160a455902]
	I0701 12:26:05.109445  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 12:26:05.126947  653531 logs.go:276] 2 containers: [c8184f4bc096 8c3a5ac0cf85]
	I0701 12:26:05.127013  653531 logs.go:123] Gathering logs for container status ...
	I0701 12:26:05.127032  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 12:26:05.172758  653531 logs.go:123] Gathering logs for describe nodes ...
	I0701 12:26:05.172800  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 12:26:05.530082  653531 logs.go:123] Gathering logs for kube-apiserver [f615f587cb12] ...
	I0701 12:26:05.530114  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f615f587cb12"
	I0701 12:26:05.563833  653531 logs.go:123] Gathering logs for kube-apiserver [c36c1d459356] ...
	I0701 12:26:05.563866  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36c1d459356"
	I0701 12:26:05.633259  653531 logs.go:123] Gathering logs for etcd [dff0f4abea41] ...
	I0701 12:26:05.633305  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dff0f4abea41"
	I0701 12:26:05.672146  653531 logs.go:123] Gathering logs for kube-scheduler [58811626a0de] ...
	I0701 12:26:05.672187  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58811626a0de"
	I0701 12:26:05.693508  653531 logs.go:123] Gathering logs for kube-proxy [2885f7cf6f93] ...
	I0701 12:26:05.693553  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2885f7cf6f93"
	I0701 12:26:05.717857  653531 logs.go:123] Gathering logs for Docker ...
	I0701 12:26:05.717889  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 12:26:05.766696  653531 logs.go:123] Gathering logs for dmesg ...
	I0701 12:26:05.766736  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 12:26:05.781553  653531 logs.go:123] Gathering logs for kube-proxy [156169e4ac3c] ...
	I0701 12:26:05.781587  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 156169e4ac3c"
	I0701 12:26:05.807724  653531 logs.go:123] Gathering logs for kindnet [8c3a5ac0cf85] ...
	I0701 12:26:05.807758  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a5ac0cf85"
	I0701 12:26:05.830042  653531 logs.go:123] Gathering logs for etcd [68c63c4abd01] ...
	I0701 12:26:05.830072  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68c63c4abd01"
	I0701 12:26:05.862525  653531 logs.go:123] Gathering logs for kube-controller-manager [a72e102b5bf7] ...
	I0701 12:26:05.862568  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a72e102b5bf7"
	I0701 12:26:05.901329  653531 logs.go:123] Gathering logs for kube-controller-manager [a1160a455902] ...
	I0701 12:26:05.901370  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1160a455902"
	I0701 12:26:05.942097  653531 logs.go:123] Gathering logs for kindnet [c8184f4bc096] ...
	I0701 12:26:05.942139  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8184f4bc096"
	I0701 12:26:05.964792  653531 logs.go:123] Gathering logs for kubelet ...
	I0701 12:26:05.964829  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 12:26:06.027347  653531 logs.go:123] Gathering logs for kube-scheduler [279483668a9c] ...
	I0701 12:26:06.027394  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483668a9c"
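
For reference, each "Gathering logs" cycle above first enumerates a component's containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` and then tails each container's log. A minimal Go sketch of that enumeration pattern (illustrative only; the function name is ours, not minikube's logs.go internals):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers returns the IDs of all containers (running or not)
    // whose names carry the k8s_<component> prefix, mirroring the
    // `docker ps -a --filter=name=... --format={{.ID}}` calls in the log.
    func listContainers(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil // one container ID per output line
    }

    func main() {
    	ids, err := listContainers("kube-apiserver")
    	if err != nil {
    		fmt.Println("docker ps failed:", err)
    		return
    	}
    	fmt.Printf("%d containers: %v\n", len(ids), ids)
    }
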
	I0701 12:26:08.550396  653531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 12:26:08.565837  653531 api_server.go:72] duration metric: took 1m18.553699317s to wait for apiserver process to appear ...
	I0701 12:26:08.565866  653531 api_server.go:88] waiting for apiserver healthz status ...
	I0701 12:26:08.565941  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 12:26:08.584274  653531 logs.go:276] 2 containers: [f615f587cb12 c36c1d459356]
	I0701 12:26:08.584349  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 12:26:08.601551  653531 logs.go:276] 2 containers: [68c63c4abd01 dff0f4abea41]
	I0701 12:26:08.601633  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 12:26:08.619657  653531 logs.go:276] 0 containers: []
	W0701 12:26:08.619687  653531 logs.go:278] No container was found matching "coredns"
	I0701 12:26:08.619744  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 12:26:08.637393  653531 logs.go:276] 2 containers: [279483668a9c 58811626a0de]
	I0701 12:26:08.637473  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 12:26:08.662222  653531 logs.go:276] 2 containers: [156169e4ac3c 2885f7cf6f93]
	I0701 12:26:08.662307  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 12:26:08.678542  653531 logs.go:276] 2 containers: [a72e102b5bf7 a1160a455902]
	I0701 12:26:08.678649  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 12:26:08.698914  653531 logs.go:276] 2 containers: [c8184f4bc096 8c3a5ac0cf85]
	I0701 12:26:08.698956  653531 logs.go:123] Gathering logs for kube-scheduler [58811626a0de] ...
	I0701 12:26:08.698968  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58811626a0de"
	I0701 12:26:08.722744  653531 logs.go:123] Gathering logs for kube-controller-manager [a72e102b5bf7] ...
	I0701 12:26:08.722780  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a72e102b5bf7"
	I0701 12:26:08.767782  653531 logs.go:123] Gathering logs for kindnet [8c3a5ac0cf85] ...
	I0701 12:26:08.767825  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a5ac0cf85"
	I0701 12:26:08.792700  653531 logs.go:123] Gathering logs for Docker ...
	I0701 12:26:08.792731  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 12:26:08.841902  653531 logs.go:123] Gathering logs for container status ...
	I0701 12:26:08.841943  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 12:26:08.885531  653531 logs.go:123] Gathering logs for kubelet ...
	I0701 12:26:08.885563  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 12:26:08.940130  653531 logs.go:123] Gathering logs for etcd [68c63c4abd01] ...
	I0701 12:26:08.940179  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68c63c4abd01"
	I0701 12:26:08.973841  653531 logs.go:123] Gathering logs for etcd [dff0f4abea41] ...
	I0701 12:26:08.973883  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dff0f4abea41"
	I0701 12:26:09.008785  653531 logs.go:123] Gathering logs for kube-apiserver [f615f587cb12] ...
	I0701 12:26:09.008824  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f615f587cb12"
	I0701 12:26:09.040512  653531 logs.go:123] Gathering logs for kube-apiserver [c36c1d459356] ...
	I0701 12:26:09.040568  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36c1d459356"
	I0701 12:26:09.135818  653531 logs.go:123] Gathering logs for kube-scheduler [279483668a9c] ...
	I0701 12:26:09.135876  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483668a9c"
	I0701 12:26:09.158758  653531 logs.go:123] Gathering logs for describe nodes ...
	I0701 12:26:09.158802  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 12:26:09.415637  653531 logs.go:123] Gathering logs for kube-proxy [2885f7cf6f93] ...
	I0701 12:26:09.415685  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2885f7cf6f93"
	I0701 12:26:09.438064  653531 logs.go:123] Gathering logs for kindnet [c8184f4bc096] ...
	I0701 12:26:09.438104  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8184f4bc096"
	I0701 12:26:09.463612  653531 logs.go:123] Gathering logs for dmesg ...
	I0701 12:26:09.463666  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 12:26:09.477906  653531 logs.go:123] Gathering logs for kube-proxy [156169e4ac3c] ...
	I0701 12:26:09.477936  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 156169e4ac3c"
	I0701 12:26:09.501662  653531 logs.go:123] Gathering logs for kube-controller-manager [a1160a455902] ...
	I0701 12:26:09.501704  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1160a455902"
	I0701 12:26:12.049246  653531 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0701 12:26:12.055739  653531 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0701 12:26:12.055824  653531 round_trippers.go:463] GET https://192.168.39.16:8443/version
	I0701 12:26:12.055829  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:12.055837  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:12.055841  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:12.056892  653531 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0701 12:26:12.057034  653531 api_server.go:141] control plane version: v1.30.2
	I0701 12:26:12.057055  653531 api_server.go:131] duration metric: took 3.491183076s to wait for apiserver health ...
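
The health wait above polls https://192.168.39.16:8443/healthz until it answers HTTP 200 with body `ok`. A sketch of that probe, with the poll interval and timeout chosen here for illustration, and TLS verification skipped for brevity (the real client trusts the cluster CA from the minikube profile):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it
    // returns HTTP 200 with body "ok", or the timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// Verification skipped only to keep the sketch short.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if resp, err := client.Get(url); err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.39.16:8443/healthz", time.Minute))
    }
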
	I0701 12:26:12.057064  653531 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 12:26:12.057160  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0701 12:26:12.074309  653531 logs.go:276] 2 containers: [f615f587cb12 c36c1d459356]
	I0701 12:26:12.074405  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0701 12:26:12.100040  653531 logs.go:276] 2 containers: [68c63c4abd01 dff0f4abea41]
	I0701 12:26:12.100116  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0701 12:26:12.119321  653531 logs.go:276] 0 containers: []
	W0701 12:26:12.119352  653531 logs.go:278] No container was found matching "coredns"
	I0701 12:26:12.119406  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0701 12:26:12.137547  653531 logs.go:276] 2 containers: [279483668a9c 58811626a0de]
	I0701 12:26:12.137660  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0701 12:26:12.157321  653531 logs.go:276] 2 containers: [156169e4ac3c 2885f7cf6f93]
	I0701 12:26:12.157417  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0701 12:26:12.182117  653531 logs.go:276] 2 containers: [a72e102b5bf7 a1160a455902]
	I0701 12:26:12.182204  653531 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0701 12:26:12.204201  653531 logs.go:276] 2 containers: [c8184f4bc096 8c3a5ac0cf85]
	I0701 12:26:12.204247  653531 logs.go:123] Gathering logs for kube-proxy [2885f7cf6f93] ...
	I0701 12:26:12.204260  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2885f7cf6f93"
	I0701 12:26:12.228173  653531 logs.go:123] Gathering logs for kube-controller-manager [a72e102b5bf7] ...
	I0701 12:26:12.228206  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a72e102b5bf7"
	I0701 12:26:12.267264  653531 logs.go:123] Gathering logs for kindnet [c8184f4bc096] ...
	I0701 12:26:12.267309  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8184f4bc096"
	I0701 12:26:12.294504  653531 logs.go:123] Gathering logs for Docker ...
	I0701 12:26:12.294535  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0701 12:26:12.344610  653531 logs.go:123] Gathering logs for describe nodes ...
	I0701 12:26:12.344649  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 12:26:12.593887  653531 logs.go:123] Gathering logs for kube-apiserver [c36c1d459356] ...
	I0701 12:26:12.593927  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36c1d459356"
	I0701 12:26:12.665033  653531 logs.go:123] Gathering logs for kube-proxy [156169e4ac3c] ...
	I0701 12:26:12.665082  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 156169e4ac3c"
	I0701 12:26:12.687103  653531 logs.go:123] Gathering logs for container status ...
	I0701 12:26:12.687142  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 12:26:12.735851  653531 logs.go:123] Gathering logs for kubelet ...
	I0701 12:26:12.735886  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 12:26:12.793127  653531 logs.go:123] Gathering logs for kube-apiserver [f615f587cb12] ...
	I0701 12:26:12.793168  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f615f587cb12"
	I0701 12:26:12.823004  653531 logs.go:123] Gathering logs for kindnet [8c3a5ac0cf85] ...
	I0701 12:26:12.823037  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a5ac0cf85"
	I0701 12:26:12.862610  653531 logs.go:123] Gathering logs for kube-scheduler [279483668a9c] ...
	I0701 12:26:12.862650  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 279483668a9c"
	I0701 12:26:12.883651  653531 logs.go:123] Gathering logs for kube-scheduler [58811626a0de] ...
	I0701 12:26:12.883685  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58811626a0de"
	I0701 12:26:12.905351  653531 logs.go:123] Gathering logs for kube-controller-manager [a1160a455902] ...
	I0701 12:26:12.905388  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1160a455902"
	I0701 12:26:12.938388  653531 logs.go:123] Gathering logs for dmesg ...
	I0701 12:26:12.938427  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 12:26:12.955609  653531 logs.go:123] Gathering logs for etcd [68c63c4abd01] ...
	I0701 12:26:12.955647  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68c63c4abd01"
	I0701 12:26:12.987593  653531 logs.go:123] Gathering logs for etcd [dff0f4abea41] ...
	I0701 12:26:12.987626  653531 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dff0f4abea41"
	I0701 12:26:15.520590  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:26:15.520616  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:15.520625  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:15.520628  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:15.528299  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:26:15.535569  653531 system_pods.go:59] 26 kube-system pods found
	I0701 12:26:15.535603  653531 system_pods.go:61] "coredns-7db6d8ff4d-nk4lf" [c03dd635-a82d-4f18-bd72-ec575f91867e] Running
	I0701 12:26:15.535608  653531 system_pods.go:61] "coredns-7db6d8ff4d-p4rtz" [267efba7-bf34-48d5-ab15-5bda45ff2f4f] Running
	I0701 12:26:15.535613  653531 system_pods.go:61] "etcd-ha-735960" [4b98745c-292f-42b5-977c-69c50fd241f1] Running
	I0701 12:26:15.535617  653531 system_pods.go:61] "etcd-ha-735960-m02" [fed8cdfa-8428-47e0-84ef-05297ad232f8] Running
	I0701 12:26:15.535622  653531 system_pods.go:61] "etcd-ha-735960-m03" [50b07bc3-ff6b-487d-8654-901d96892868] Running
	I0701 12:26:15.535625  653531 system_pods.go:61] "kindnet-2424m" [aa18d5dd-f6eb-4f04-a61e-b0b257e214af] Running
	I0701 12:26:15.535628  653531 system_pods.go:61] "kindnet-6gx8s" [7f46a773-a075-476c-9e54-89f125b4b57a] Running
	I0701 12:26:15.535631  653531 system_pods.go:61] "kindnet-7f6hm" [a8c302b4-1163-4d4f-bfe3-4fd3b5d23cf0] Running
	I0701 12:26:15.535633  653531 system_pods.go:61] "kindnet-bztzv" [7afa0e45-3d10-40bc-b422-7005a3ca9d3a] Running
	I0701 12:26:15.535636  653531 system_pods.go:61] "kube-apiserver-ha-735960" [ad041aaa-465a-4d8a-a8dc-b7665e1d587d] Running
	I0701 12:26:15.535640  653531 system_pods.go:61] "kube-apiserver-ha-735960-m02" [ba28f48e-1c18-47e3-ab11-a9b5588c5c32] Running
	I0701 12:26:15.535642  653531 system_pods.go:61] "kube-apiserver-ha-735960-m03" [baafa3bf-78ee-4269-9591-b0440927e055] Running
	I0701 12:26:15.535646  653531 system_pods.go:61] "kube-controller-manager-ha-735960" [3f0f0cf5-329d-47bc-b922-7583902e2607] Running
	I0701 12:26:15.535649  653531 system_pods.go:61] "kube-controller-manager-ha-735960-m02" [258fde18-ac5c-4446-842b-9465529b154c] Running
	I0701 12:26:15.535652  653531 system_pods.go:61] "kube-controller-manager-ha-735960-m03" [79acc56b-a9e0-4d4b-bc64-1a3a36ddf051] Running
	I0701 12:26:15.535655  653531 system_pods.go:61] "kube-proxy-25ssf" [11f0dc23-ab9d-4d39-988d-4c44dfde86cd] Running
	I0701 12:26:15.535658  653531 system_pods.go:61] "kube-proxy-776rt" [5666dac7-924e-4429-bd1d-a1a5647cc611] Running
	I0701 12:26:15.535661  653531 system_pods.go:61] "kube-proxy-b6knb" [eb36e930-5799-4ff7-821a-ccb22303cd1b] Running
	I0701 12:26:15.535664  653531 system_pods.go:61] "kube-proxy-lphzn" [0761a7a6-740e-4cde-9ab5-e02e8d417907] Running
	I0701 12:26:15.535667  653531 system_pods.go:61] "kube-scheduler-ha-735960" [c624cf42-a7d6-4aaf-859d-1aeaf29f9acb] Running
	I0701 12:26:15.535670  653531 system_pods.go:61] "kube-scheduler-ha-735960-m02" [7de78af7-2d79-46dc-bd34-f221d79fde06] Running
	I0701 12:26:15.535673  653531 system_pods.go:61] "kube-scheduler-ha-735960-m03" [9f9a2030-9332-44af-b8dc-3b4609e53f91] Running
	I0701 12:26:15.535676  653531 system_pods.go:61] "kube-vip-ha-735960" [4299679a-c145-4f4f-8ec6-3cd468b98ef1] Running
	I0701 12:26:15.535679  653531 system_pods.go:61] "kube-vip-ha-735960-m02" [1c9b13e1-515c-43c0-8d99-5ad1c1807727] Running
	I0701 12:26:15.535684  653531 system_pods.go:61] "kube-vip-ha-735960-m03" [7069ea7c-5461-4fe6-a969-97fe33396ebb] Running
	I0701 12:26:15.535688  653531 system_pods.go:61] "storage-provisioner" [f5c4f7f9-d648-4019-a5ea-6ce59f6c5663] Running
	I0701 12:26:15.535693  653531 system_pods.go:74] duration metric: took 3.47862483s to wait for pod list to return data ...
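
The 26-pod inventory above comes from a single GET on /api/v1/namespaces/kube-system/pods. An equivalent read written against client-go, as a sketch (minikube issues the request through its own logging round-tripper; the kubeconfig path below is a placeholder):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder path; the test resolves its kubeconfig from the
    	// minikube-integration home shown earlier in the log.
    	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    	}
    }
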
	I0701 12:26:15.535701  653531 default_sa.go:34] waiting for default service account to be created ...
	I0701 12:26:15.535798  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/default/serviceaccounts
	I0701 12:26:15.535809  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:15.535816  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:15.535820  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:15.539198  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:15.539410  653531 default_sa.go:45] found service account: "default"
	I0701 12:26:15.539425  653531 default_sa.go:55] duration metric: took 3.71568ms for default service account to be created ...
	I0701 12:26:15.539433  653531 system_pods.go:116] waiting for k8s-apps to be running ...
	I0701 12:26:15.539483  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:26:15.539490  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:15.539497  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:15.539503  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:15.547242  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:26:15.553992  653531 system_pods.go:86] 26 kube-system pods found
	I0701 12:26:15.554026  653531 system_pods.go:89] "coredns-7db6d8ff4d-nk4lf" [c03dd635-a82d-4f18-bd72-ec575f91867e] Running
	I0701 12:26:15.554034  653531 system_pods.go:89] "coredns-7db6d8ff4d-p4rtz" [267efba7-bf34-48d5-ab15-5bda45ff2f4f] Running
	I0701 12:26:15.554040  653531 system_pods.go:89] "etcd-ha-735960" [4b98745c-292f-42b5-977c-69c50fd241f1] Running
	I0701 12:26:15.554046  653531 system_pods.go:89] "etcd-ha-735960-m02" [fed8cdfa-8428-47e0-84ef-05297ad232f8] Running
	I0701 12:26:15.554050  653531 system_pods.go:89] "etcd-ha-735960-m03" [50b07bc3-ff6b-487d-8654-901d96892868] Running
	I0701 12:26:15.554056  653531 system_pods.go:89] "kindnet-2424m" [aa18d5dd-f6eb-4f04-a61e-b0b257e214af] Running
	I0701 12:26:15.554062  653531 system_pods.go:89] "kindnet-6gx8s" [7f46a773-a075-476c-9e54-89f125b4b57a] Running
	I0701 12:26:15.554069  653531 system_pods.go:89] "kindnet-7f6hm" [a8c302b4-1163-4d4f-bfe3-4fd3b5d23cf0] Running
	I0701 12:26:15.554075  653531 system_pods.go:89] "kindnet-bztzv" [7afa0e45-3d10-40bc-b422-7005a3ca9d3a] Running
	I0701 12:26:15.554081  653531 system_pods.go:89] "kube-apiserver-ha-735960" [ad041aaa-465a-4d8a-a8dc-b7665e1d587d] Running
	I0701 12:26:15.554088  653531 system_pods.go:89] "kube-apiserver-ha-735960-m02" [ba28f48e-1c18-47e3-ab11-a9b5588c5c32] Running
	I0701 12:26:15.554099  653531 system_pods.go:89] "kube-apiserver-ha-735960-m03" [baafa3bf-78ee-4269-9591-b0440927e055] Running
	I0701 12:26:15.554107  653531 system_pods.go:89] "kube-controller-manager-ha-735960" [3f0f0cf5-329d-47bc-b922-7583902e2607] Running
	I0701 12:26:15.554115  653531 system_pods.go:89] "kube-controller-manager-ha-735960-m02" [258fde18-ac5c-4446-842b-9465529b154c] Running
	I0701 12:26:15.554123  653531 system_pods.go:89] "kube-controller-manager-ha-735960-m03" [79acc56b-a9e0-4d4b-bc64-1a3a36ddf051] Running
	I0701 12:26:15.554131  653531 system_pods.go:89] "kube-proxy-25ssf" [11f0dc23-ab9d-4d39-988d-4c44dfde86cd] Running
	I0701 12:26:15.554140  653531 system_pods.go:89] "kube-proxy-776rt" [5666dac7-924e-4429-bd1d-a1a5647cc611] Running
	I0701 12:26:15.554148  653531 system_pods.go:89] "kube-proxy-b6knb" [eb36e930-5799-4ff7-821a-ccb22303cd1b] Running
	I0701 12:26:15.554163  653531 system_pods.go:89] "kube-proxy-lphzn" [0761a7a6-740e-4cde-9ab5-e02e8d417907] Running
	I0701 12:26:15.554170  653531 system_pods.go:89] "kube-scheduler-ha-735960" [c624cf42-a7d6-4aaf-859d-1aeaf29f9acb] Running
	I0701 12:26:15.554176  653531 system_pods.go:89] "kube-scheduler-ha-735960-m02" [7de78af7-2d79-46dc-bd34-f221d79fde06] Running
	I0701 12:26:15.554183  653531 system_pods.go:89] "kube-scheduler-ha-735960-m03" [9f9a2030-9332-44af-b8dc-3b4609e53f91] Running
	I0701 12:26:15.554192  653531 system_pods.go:89] "kube-vip-ha-735960" [4299679a-c145-4f4f-8ec6-3cd468b98ef1] Running
	I0701 12:26:15.554199  653531 system_pods.go:89] "kube-vip-ha-735960-m02" [1c9b13e1-515c-43c0-8d99-5ad1c1807727] Running
	I0701 12:26:15.554207  653531 system_pods.go:89] "kube-vip-ha-735960-m03" [7069ea7c-5461-4fe6-a969-97fe33396ebb] Running
	I0701 12:26:15.554216  653531 system_pods.go:89] "storage-provisioner" [f5c4f7f9-d648-4019-a5ea-6ce59f6c5663] Running
	I0701 12:26:15.554229  653531 system_pods.go:126] duration metric: took 14.787055ms to wait for k8s-apps to be running ...
	I0701 12:26:15.554241  653531 system_svc.go:44] waiting for kubelet service to be running ....
	I0701 12:26:15.554296  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:26:15.567890  653531 system_svc.go:56] duration metric: took 13.638054ms WaitForService to wait for kubelet
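
The WaitForService step above is a plain exit-code probe on systemd. A sketch wrapping the exact command from the log (`systemctl is-active` exits 0 if at least one of the named units is active, so the extra "service" argument does not break the probe):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Command copied verbatim from the ssh_runner line above.
    	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
    	if err != nil {
    		fmt.Println("kubelet is not running:", err)
    		return
    	}
    	fmt.Println("kubelet is running")
    }
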
	I0701 12:26:15.567925  653531 kubeadm.go:576] duration metric: took 1m25.555790211s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:26:15.567951  653531 node_conditions.go:102] verifying NodePressure condition ...
	I0701 12:26:15.568047  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes
	I0701 12:26:15.568057  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:15.568067  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:15.568074  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:15.575311  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:26:15.577277  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:26:15.577310  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:26:15.577328  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:26:15.577334  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:26:15.577339  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:26:15.577343  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:26:15.577348  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:26:15.577352  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:26:15.577358  653531 node_conditions.go:105] duration metric: took 9.401356ms to run NodePressure ...
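
The NodePressure pass reads each node's reported capacity (cpu 2 and ephemeral-storage 17734596Ki per node above). The same read as a client-go sketch, reusing the placeholder kubeconfig idea from the earlier example:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]              // "2" in the log above
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage] // "17734596Ki" above
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    	}
    }
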
	I0701 12:26:15.577372  653531 start.go:240] waiting for startup goroutines ...
	I0701 12:26:15.577418  653531 start.go:254] writing updated cluster config ...
	I0701 12:26:15.579876  653531 out.go:177] 
	I0701 12:26:15.581466  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:26:15.581562  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:26:15.583519  653531 out.go:177] * Starting "ha-735960-m03" control-plane node in "ha-735960" cluster
	I0701 12:26:15.584707  653531 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:26:15.584732  653531 cache.go:56] Caching tarball of preloaded images
	I0701 12:26:15.584831  653531 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:26:15.584841  653531 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:26:15.584932  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:26:15.585716  653531 start.go:360] acquireMachinesLock for ha-735960-m03: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:26:15.585768  653531 start.go:364] duration metric: took 28.47µs to acquireMachinesLock for "ha-735960-m03"
	I0701 12:26:15.585785  653531 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:26:15.585798  653531 fix.go:54] fixHost starting: m03
	I0701 12:26:15.586107  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:26:15.586143  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:26:15.603500  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43455
	I0701 12:26:15.603962  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:26:15.604555  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:26:15.604579  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:26:15.604930  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:26:15.605195  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:15.605384  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetState
	I0701 12:26:15.607018  653531 fix.go:112] recreateIfNeeded on ha-735960-m03: state=Stopped err=<nil>
	I0701 12:26:15.607042  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	W0701 12:26:15.607213  653531 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:26:15.609349  653531 out.go:177] * Restarting existing kvm2 VM for "ha-735960-m03" ...
	I0701 12:26:15.610714  653531 main.go:141] libmachine: (ha-735960-m03) Calling .Start
	I0701 12:26:15.610921  653531 main.go:141] libmachine: (ha-735960-m03) Ensuring networks are active...
	I0701 12:26:15.611706  653531 main.go:141] libmachine: (ha-735960-m03) Ensuring network default is active
	I0701 12:26:15.612087  653531 main.go:141] libmachine: (ha-735960-m03) Ensuring network mk-ha-735960 is active
	I0701 12:26:15.612457  653531 main.go:141] libmachine: (ha-735960-m03) Getting domain xml...
	I0701 12:26:15.613082  653531 main.go:141] libmachine: (ha-735960-m03) Creating domain...
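
"Restarting existing kvm2 VM" boils down to starting an already-defined libvirt domain: reactivate the networks, fetch the domain XML, create the domain, then wait for a DHCP lease. A sketch of the start step using the libvirt Go bindings (the kvm2 driver plugin does this through its own machine wrapper; the URI is the usual system socket and the domain name is taken from the log):

    package main

    import (
    	"fmt"

    	libvirt "libvirt.org/go/libvirt"
    )

    func main() {
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	dom, err := conn.LookupDomainByName("ha-735960-m03")
    	if err != nil {
    		panic(err)
    	}
    	defer dom.Free()

    	// Create() boots a defined-but-stopped domain (virsh start); the
    	// driver then polls for a DHCP lease, as the retries below show.
    	if err := dom.Create(); err != nil {
    		panic(err)
    	}
    	fmt.Println("domain started")
    }
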
	I0701 12:26:16.855928  653531 main.go:141] libmachine: (ha-735960-m03) Waiting to get IP...
	I0701 12:26:16.856767  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:16.857131  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:16.857182  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:16.857114  654164 retry.go:31] will retry after 232.687433ms: waiting for machine to come up
	I0701 12:26:17.091660  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:17.092187  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:17.092229  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:17.092112  654164 retry.go:31] will retry after 320.051772ms: waiting for machine to come up
	I0701 12:26:17.413613  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:17.414125  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:17.414157  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:17.414063  654164 retry.go:31] will retry after 415.446228ms: waiting for machine to come up
	I0701 12:26:17.830725  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:17.831413  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:17.831445  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:17.831349  654164 retry.go:31] will retry after 522.707968ms: waiting for machine to come up
	I0701 12:26:18.356092  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:18.356521  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:18.356543  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:18.356485  654164 retry.go:31] will retry after 572.783424ms: waiting for machine to come up
	I0701 12:26:18.931377  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:18.931831  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:18.931856  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:18.931778  654164 retry.go:31] will retry after 662.269299ms: waiting for machine to come up
	I0701 12:26:19.595406  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:19.595831  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:19.595862  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:19.595779  654164 retry.go:31] will retry after 965.977644ms: waiting for machine to come up
	I0701 12:26:20.562930  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:20.563372  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:20.563432  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:20.563328  654164 retry.go:31] will retry after 1.166893605s: waiting for machine to come up
	I0701 12:26:21.731632  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:21.732082  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:21.732114  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:21.732040  654164 retry.go:31] will retry after 1.800222328s: waiting for machine to come up
	I0701 12:26:23.534948  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:23.535342  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:23.535372  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:23.535277  654164 retry.go:31] will retry after 1.820829305s: waiting for machine to come up
	I0701 12:26:25.357271  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:25.357677  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:25.357701  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:25.357630  654164 retry.go:31] will retry after 1.816274117s: waiting for machine to come up
	I0701 12:26:27.176155  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:27.176621  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:27.176653  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:27.176598  654164 retry.go:31] will retry after 2.782602178s: waiting for machine to come up
	I0701 12:26:29.960991  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:29.961388  653531 main.go:141] libmachine: (ha-735960-m03) DBG | unable to find current IP address of domain ha-735960-m03 in network mk-ha-735960
	I0701 12:26:29.961421  653531 main.go:141] libmachine: (ha-735960-m03) DBG | I0701 12:26:29.961334  654164 retry.go:31] will retry after 3.816886888s: waiting for machine to come up
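
The retry delays above (232ms, 320ms, 415ms, ..., 3.8s) are consistent with a jittered, roughly geometric backoff around polling the libvirt DHCP leases for the domain's MAC address. A sketch of such a wait loop; the growth factor, jitter, timeout, and poll body are all chosen here for illustration, not taken from retry.go:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff polls until poll succeeds or maxWait elapses,
    // growing the delay ~1.5x per attempt with random jitter so parallel
    // waiters do not hit the lease table in lockstep.
    func retryWithBackoff(maxWait time.Duration, poll func() (string, error)) (string, error) {
    	delay := 200 * time.Millisecond
    	deadline := time.Now().Add(maxWait)
    	for time.Now().Before(deadline) {
    		if ip, err := poll(); err == nil {
    			return ip, nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		time.Sleep(delay + jitter)
    		delay = delay * 3 / 2
    	}
    	return "", errors.New("timed out waiting for machine to come up")
    }

    func main() {
    	// Placeholder poll: the real loop inspects the libvirt DHCP lease
    	// table for the domain's MAC address (52:54:00:93:88:f2 above).
    	ip, err := retryWithBackoff(3*time.Second, func() (string, error) {
    		return "", errors.New("unable to find current IP address")
    	})
    	fmt.Println(ip, err)
    }
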
	I0701 12:26:33.779810  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.780404  653531 main.go:141] libmachine: (ha-735960-m03) Found IP for machine: 192.168.39.97
	I0701 12:26:33.780436  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has current primary IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.780448  653531 main.go:141] libmachine: (ha-735960-m03) Reserving static IP address...
	I0701 12:26:33.780953  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "ha-735960-m03", mac: "52:54:00:93:88:f2", ip: "192.168.39.97"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:33.780975  653531 main.go:141] libmachine: (ha-735960-m03) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960-m03", mac: "52:54:00:93:88:f2", ip: "192.168.39.97"}
	I0701 12:26:33.780986  653531 main.go:141] libmachine: (ha-735960-m03) Reserved static IP address: 192.168.39.97
	I0701 12:26:33.780995  653531 main.go:141] libmachine: (ha-735960-m03) Waiting for SSH to be available...
	I0701 12:26:33.781005  653531 main.go:141] libmachine: (ha-735960-m03) DBG | Getting to WaitForSSH function...
	I0701 12:26:33.783239  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.783609  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:33.783636  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.783742  653531 main.go:141] libmachine: (ha-735960-m03) DBG | Using SSH client type: external
	I0701 12:26:33.783770  653531 main.go:141] libmachine: (ha-735960-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa (-rw-------)
	I0701 12:26:33.783810  653531 main.go:141] libmachine: (ha-735960-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:26:33.783825  653531 main.go:141] libmachine: (ha-735960-m03) DBG | About to run SSH command:
	I0701 12:26:33.783839  653531 main.go:141] libmachine: (ha-735960-m03) DBG | exit 0
	I0701 12:26:33.906528  653531 main.go:141] libmachine: (ha-735960-m03) DBG | SSH cmd err, output: <nil>: 
	I0701 12:26:33.906854  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetConfigRaw
	I0701 12:26:33.907659  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetIP
	I0701 12:26:33.910504  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.910919  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:33.910952  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.911199  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:26:33.911468  653531 machine.go:94] provisionDockerMachine start ...
	I0701 12:26:33.911493  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:33.911726  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:33.913742  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.914049  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:33.914079  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:33.914213  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:33.914440  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:33.914614  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:33.914781  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:33.914952  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:33.915169  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:33.915186  653531 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:26:34.022720  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:26:34.022751  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetMachineName
	I0701 12:26:34.023048  653531 buildroot.go:166] provisioning hostname "ha-735960-m03"
	I0701 12:26:34.023086  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetMachineName
	I0701 12:26:34.023302  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.026253  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.026699  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.026731  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.026891  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.027100  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.027330  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.027468  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.027637  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:34.027853  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:34.027872  653531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960-m03 && echo "ha-735960-m03" | sudo tee /etc/hostname
	I0701 12:26:34.143884  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960-m03
	
	I0701 12:26:34.143919  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.146876  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.147233  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.147259  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.147410  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.147595  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.147764  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.147906  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.148107  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:34.148271  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:34.148287  653531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:26:34.259290  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
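
Each provisioning step above (`hostname`, the `sudo hostname ... | sudo tee /etc/hostname` pair, the /etc/hosts patch) runs as a one-shot command over SSH with the machine's id_rsa. A native-client sketch using golang.org/x/crypto/ssh, with host-key checking disabled to mirror the provisioner's StrictHostKeyChecking=no; the address, user, and key path are copied from the log:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runSSH executes one command on the machine and returns its combined
    // stdout/stderr, the way the provisioner runs each step above.
    func runSSH(addr, user, keyPath, command string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // StrictHostKeyChecking=no
    	})
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput(command)
    	return string(out), err
    }

    func main() {
    	out, err := runSSH("192.168.39.97:22", "docker",
    		"/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa",
    		"hostname")
    	fmt.Println(out, err)
    }
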
	I0701 12:26:34.259326  653531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:26:34.259348  653531 buildroot.go:174] setting up certificates
	I0701 12:26:34.259361  653531 provision.go:84] configureAuth start
	I0701 12:26:34.259373  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetMachineName
	I0701 12:26:34.259700  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetIP
	I0701 12:26:34.262660  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.263056  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.263088  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.263229  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.265709  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.266104  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.266129  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.266291  653531 provision.go:143] copyHostCerts
	I0701 12:26:34.266320  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:26:34.266385  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:26:34.266399  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:26:34.266510  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:26:34.266616  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:26:34.266642  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:26:34.266651  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:26:34.266687  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:26:34.266758  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:26:34.266785  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:26:34.266794  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:26:34.266828  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:26:34.266895  653531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960-m03 san=[127.0.0.1 192.168.39.97 ha-735960-m03 localhost minikube]
	I0701 12:26:34.565581  653531 provision.go:177] copyRemoteCerts
	I0701 12:26:34.565649  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:26:34.565676  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.568539  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.568839  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.568870  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.569025  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.569261  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.569428  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.569588  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa Username:docker}
	I0701 12:26:34.652136  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:26:34.652230  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:26:34.676227  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:26:34.676305  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0701 12:26:34.699234  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:26:34.699313  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 12:26:34.721885  653531 provision.go:87] duration metric: took 462.509686ms to configureAuth
	I0701 12:26:34.721915  653531 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:26:34.722137  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:26:34.722181  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:34.722494  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.725227  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.725601  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.725629  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.725789  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.725994  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.726175  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.726384  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.726572  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:34.726794  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:34.726809  653531 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:26:34.831674  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:26:34.831699  653531 buildroot.go:70] root file system type: tmpfs
	I0701 12:26:34.831846  653531 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:26:34.831923  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.835107  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.835603  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.835626  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.835928  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.836184  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.836401  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.836577  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.836754  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:34.836963  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:34.837056  653531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.16"
	Environment="NO_PROXY=192.168.39.16,192.168.39.86"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:26:34.951789  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.16
	Environment=NO_PROXY=192.168.39.16,192.168.39.86
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:26:34.951830  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:34.954854  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.955349  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:34.955376  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:34.955552  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:34.955761  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.955952  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:34.956104  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:34.956269  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:34.956436  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:34.956451  653531 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:26:36.820196  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 12:26:36.820235  653531 machine.go:97] duration metric: took 2.908749821s to provisionDockerMachine
	I0701 12:26:36.820254  653531 start.go:293] postStartSetup for "ha-735960-m03" (driver="kvm2")
	I0701 12:26:36.820269  653531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:26:36.820322  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:36.820717  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:26:36.820758  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:36.823679  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:36.824131  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:36.824158  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:36.824315  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:36.824557  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:36.824862  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:36.825025  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa Username:docker}
	I0701 12:26:36.909262  653531 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:26:36.913798  653531 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:26:36.913830  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:26:36.913904  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:26:36.913973  653531 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:26:36.913983  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:26:36.914063  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:26:36.924147  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:26:36.949103  653531 start.go:296] duration metric: took 128.830664ms for postStartSetup
	I0701 12:26:36.949169  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:36.949541  653531 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:26:36.949572  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:36.952321  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:36.952670  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:36.952703  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:36.952895  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:36.953116  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:36.953299  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:36.953494  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa Username:docker}
	I0701 12:26:37.037086  653531 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:26:37.037223  653531 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:26:37.097170  653531 fix.go:56] duration metric: took 21.511363009s for fixHost
	I0701 12:26:37.097229  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:37.100519  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.100936  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:37.100988  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.101235  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:37.101494  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:37.101681  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:37.101864  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:37.102058  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:26:37.102248  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0701 12:26:37.102261  653531 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0701 12:26:37.210872  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836797.190240924
	
	I0701 12:26:37.210897  653531 fix.go:216] guest clock: 1719836797.190240924
	I0701 12:26:37.210906  653531 fix.go:229] Guest: 2024-07-01 12:26:37.190240924 +0000 UTC Remote: 2024-07-01 12:26:37.09720405 +0000 UTC m=+154.567055715 (delta=93.036874ms)
	I0701 12:26:37.210928  653531 fix.go:200] guest clock delta is within tolerance: 93.036874ms
	I0701 12:26:37.210935  653531 start.go:83] releasing machines lock for "ha-735960-m03", held for 21.625157566s
	I0701 12:26:37.210966  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:37.211304  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetIP
	I0701 12:26:37.213807  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.214222  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:37.214255  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.216716  653531 out.go:177] * Found network options:
	I0701 12:26:37.218305  653531 out.go:177]   - NO_PROXY=192.168.39.16,192.168.39.86
	W0701 12:26:37.219816  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:26:37.219845  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:26:37.219865  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:37.220522  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:37.220737  653531 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:26:37.220844  653531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:26:37.220887  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	W0701 12:26:37.220953  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:26:37.220981  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:26:37.221057  653531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0701 12:26:37.221077  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:26:37.223616  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.223976  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:37.224003  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.224022  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.224163  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:37.224349  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:37.224476  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:37.224495  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:37.224522  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:37.224684  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:26:37.224708  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa Username:docker}
	I0701 12:26:37.224822  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:26:37.224957  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:26:37.225089  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa Username:docker}
	W0701 12:26:37.324512  653531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:26:37.324590  653531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:26:37.342354  653531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:26:37.342401  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:26:37.342553  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:26:37.361964  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:26:37.372356  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:26:37.382741  653531 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:26:37.382800  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:26:37.393672  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:26:37.404182  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:26:37.413967  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:26:37.425102  653531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:26:37.436486  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:26:37.448119  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:26:37.459499  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:26:37.470904  653531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:26:37.480202  653531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:26:37.489935  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:37.612275  653531 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 12:26:37.635575  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:26:37.635692  653531 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:26:37.653571  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:26:37.670438  653531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:26:37.688000  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:26:37.705115  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:26:37.718914  653531 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:26:37.744858  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:26:37.759980  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:26:37.779721  653531 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:26:37.783771  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:26:37.794141  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:26:37.811510  653531 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:26:37.931976  653531 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:26:38.066164  653531 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:26:38.066230  653531 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 12:26:38.083572  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:38.206358  653531 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:26:40.648995  653531 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.442581628s)
	I0701 12:26:40.649094  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 12:26:40.663523  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:26:40.678231  653531 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:26:40.794839  653531 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:26:40.936707  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:41.068605  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:26:41.086480  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:26:41.102238  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:41.225877  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0701 12:26:41.309074  653531 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:26:41.309144  653531 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0701 12:26:41.314764  653531 start.go:562] Will wait 60s for crictl version
	I0701 12:26:41.314839  653531 ssh_runner.go:195] Run: which crictl
	I0701 12:26:41.318792  653531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:26:41.356836  653531 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.1
	RuntimeApiVersion:  v1
	I0701 12:26:41.356927  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:26:41.383790  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:26:41.409143  653531 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
	I0701 12:26:41.410603  653531 out.go:177]   - env NO_PROXY=192.168.39.16
	I0701 12:26:41.412215  653531 out.go:177]   - env NO_PROXY=192.168.39.16,192.168.39.86
	I0701 12:26:41.413404  653531 main.go:141] libmachine: (ha-735960-m03) Calling .GetIP
	I0701 12:26:41.416274  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:41.416763  653531 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:26:25 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:26:41.416796  653531 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:26:41.417070  653531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0701 12:26:41.421392  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:26:41.434549  653531 mustload.go:65] Loading cluster: ha-735960
	I0701 12:26:41.434797  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:26:41.435079  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:26:41.435129  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:26:41.451156  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45677
	I0701 12:26:41.451676  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:26:41.452212  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:26:41.452237  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:26:41.452614  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:26:41.452827  653531 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:26:41.454575  653531 host.go:66] Checking if "ha-735960" exists ...
	I0701 12:26:41.454891  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:26:41.454938  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:26:41.471129  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33243
	I0701 12:26:41.471681  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:26:41.472198  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:26:41.472222  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:26:41.472612  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:26:41.472844  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:26:41.473032  653531 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960 for IP: 192.168.39.97
	I0701 12:26:41.473049  653531 certs.go:194] generating shared ca certs ...
	I0701 12:26:41.473074  653531 certs.go:226] acquiring lock for ca certs: {Name:mk34e166bfd069e523b2325e14d1812c523bff53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:26:41.473230  653531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key
	I0701 12:26:41.473268  653531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key
	I0701 12:26:41.473278  653531 certs.go:256] generating profile certs ...
	I0701 12:26:41.473349  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key
	I0701 12:26:41.473405  653531 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.f1482ab5
	I0701 12:26:41.473453  653531 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key
	I0701 12:26:41.473465  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 12:26:41.473478  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 12:26:41.473490  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 12:26:41.473503  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 12:26:41.473514  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0701 12:26:41.473528  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0701 12:26:41.473537  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0701 12:26:41.473548  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0701 12:26:41.473603  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem (1338 bytes)
	W0701 12:26:41.473630  653531 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854_empty.pem, impossibly tiny 0 bytes
	I0701 12:26:41.473639  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 12:26:41.473659  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:26:41.473680  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:26:41.473702  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem (1675 bytes)
	I0701 12:26:41.473736  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:26:41.473759  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem -> /usr/share/ca-certificates/637854.pem
	I0701 12:26:41.473772  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /usr/share/ca-certificates/6378542.pem
	I0701 12:26:41.473784  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:26:41.494518  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:26:41.498371  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:26:41.498974  653531 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:24:12 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:26:41.499011  653531 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:26:41.499158  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:26:41.499416  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:26:41.499610  653531 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:26:41.499835  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:26:41.570757  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0701 12:26:41.575932  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0701 12:26:41.587511  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0701 12:26:41.591633  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0701 12:26:41.604961  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0701 12:26:41.609152  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0701 12:26:41.619653  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0701 12:26:41.623572  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0701 12:26:41.634171  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0701 12:26:41.638176  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0701 12:26:41.654120  653531 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0701 12:26:41.659095  653531 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0701 12:26:41.671865  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:26:41.701740  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:26:41.726445  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:26:41.751925  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:26:41.776782  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0701 12:26:41.801611  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 12:26:41.825786  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 12:26:41.849992  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 12:26:41.873760  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem --> /usr/share/ca-certificates/637854.pem (1338 bytes)
	I0701 12:26:41.898685  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /usr/share/ca-certificates/6378542.pem (1708 bytes)
	I0701 12:26:41.923397  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:26:41.948251  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0701 12:26:41.965919  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0701 12:26:41.982966  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0701 12:26:42.001626  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0701 12:26:42.019386  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0701 12:26:42.036382  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0701 12:26:42.053238  653531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0701 12:26:42.070881  653531 ssh_runner.go:195] Run: openssl version
	I0701 12:26:42.076651  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/637854.pem && ln -fs /usr/share/ca-certificates/637854.pem /etc/ssl/certs/637854.pem"
	I0701 12:26:42.087389  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/637854.pem
	I0701 12:26:42.093055  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 12:11 /usr/share/ca-certificates/637854.pem
	I0701 12:26:42.093154  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/637854.pem
	I0701 12:26:42.099823  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/637854.pem /etc/ssl/certs/51391683.0"
	I0701 12:26:42.111701  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6378542.pem && ln -fs /usr/share/ca-certificates/6378542.pem /etc/ssl/certs/6378542.pem"
	I0701 12:26:42.125593  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6378542.pem
	I0701 12:26:42.130163  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 12:11 /usr/share/ca-certificates/6378542.pem
	I0701 12:26:42.130246  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem
	I0701 12:26:42.136102  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6378542.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 12:26:42.147064  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:26:42.159086  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:26:42.163767  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:26:42.163864  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:26:42.170462  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 12:26:42.181119  653531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 12:26:42.185711  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 12:26:42.191736  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 12:26:42.198232  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 12:26:42.204698  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 12:26:42.210909  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 12:26:42.216837  653531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0701 12:26:42.222755  653531 kubeadm.go:928] updating node {m03 192.168.39.97 8443 v1.30.2 docker true true} ...
	I0701 12:26:42.222878  653531 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0701 12:26:42.222906  653531 kube-vip.go:115] generating kube-vip config ...
	I0701 12:26:42.222955  653531 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0701 12:26:42.237298  653531 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0701 12:26:42.237376  653531 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0701 12:26:42.237455  653531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 12:26:42.247439  653531 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:26:42.247515  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0701 12:26:42.257290  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0701 12:26:42.274152  653531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:26:42.290241  653531 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0701 12:26:42.308095  653531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0701 12:26:42.312034  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:26:42.325214  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:42.447612  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:26:42.465983  653531 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:26:42.466298  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:26:42.468248  653531 out.go:177] * Verifying Kubernetes components...
	I0701 12:26:42.469706  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:26:42.625060  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:26:42.647149  653531 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:26:42.647532  653531 kapi.go:59] client config for ha-735960: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key", CAFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfbb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0701 12:26:42.647632  653531 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.16:8443
	I0701 12:26:42.647948  653531 node_ready.go:35] waiting up to 6m0s for node "ha-735960-m03" to be "Ready" ...
	I0701 12:26:42.648043  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:42.648055  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:42.648066  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:42.648079  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:42.652553  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:26:43.148887  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:43.148913  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.148924  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.148931  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.152504  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:43.153020  653531 node_ready.go:49] node "ha-735960-m03" has status "Ready":"True"
	I0701 12:26:43.153041  653531 node_ready.go:38] duration metric: took 505.070913ms for node "ha-735960-m03" to be "Ready" ...
	I0701 12:26:43.153051  653531 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 12:26:43.153132  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:26:43.153144  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.153154  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.153161  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.159789  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:26:43.167076  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.167158  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:26:43.167167  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.167175  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.167179  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.169757  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.170310  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:43.170347  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.170357  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.170362  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.173097  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.173879  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:43.173897  653531 pod_ready.go:81] duration metric: took 6.79477ms for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.173905  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.173970  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p4rtz
	I0701 12:26:43.173977  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.173984  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.173987  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.176719  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.177389  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:43.177403  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.177410  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.177415  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.180272  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.180876  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:43.180892  653531 pod_ready.go:81] duration metric: took 6.981686ms for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.180901  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.180946  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960
	I0701 12:26:43.180953  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.180959  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.180963  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.183979  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:43.184715  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:43.184733  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.184744  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.184750  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.187303  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.187727  653531 pod_ready.go:92] pod "etcd-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:43.187743  653531 pod_ready.go:81] duration metric: took 6.837753ms for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.187751  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.187803  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m02
	I0701 12:26:43.187810  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.187816  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.187820  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.190206  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.190728  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:43.190744  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.190753  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.190761  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.193433  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:43.194190  653531 pod_ready.go:92] pod "etcd-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:43.194207  653531 pod_ready.go:81] duration metric: took 6.448739ms for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.194216  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:43.349638  653531 request.go:629] Waited for 155.349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:43.349754  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:43.349767  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.349778  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.349790  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.354862  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:26:43.548911  653531 request.go:629] Waited for 193.270032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:43.548983  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:43.549014  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.549029  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.549034  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.554047  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:26:43.749322  653531 request.go:629] Waited for 54.224497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:43.749397  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:43.749405  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.749423  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.749433  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.753610  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:26:43.949318  653531 request.go:629] Waited for 194.40537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:43.949442  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:43.949455  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:43.949466  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:43.949475  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:43.953476  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:44.195013  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:44.195041  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:44.195053  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:44.195058  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:44.198623  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:44.349775  653531 request.go:629] Waited for 150.337133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:44.349881  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:44.349890  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:44.349901  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:44.349909  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:44.354832  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:26:44.694539  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:44.694560  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:44.694569  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:44.694573  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:44.698072  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:44.749262  653531 request.go:629] Waited for 50.212385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:44.749342  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:44.749357  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:44.749376  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:44.749400  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:44.759594  653531 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
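
The "Waited for … due to client-side throttling, not priority and fairness" lines are client-go's request.go reporting that its client-side token-bucket rate limiter held a request back; waits longer than about 50ms are logged, which is why delays from ~50ms up to ~195ms appear here while shorter ones do not. The limiter is configured through QPS and Burst on the client's rest.Config. A minimal sketch, with illustrative values rather than minikube's actual settings (client-go's defaults are QPS 5, Burst 10):

package clientutil

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient builds a clientset with a more permissive client-side rate
// limiter, which would shrink throttling waits like the ones logged above.
// The QPS/Burst values are illustrative only.
func newClient(kubeconfig string) (kubernetes.Interface, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // default is 5 requests/second once the burst is spent
	cfg.Burst = 100 // default is 10
	return kubernetes.NewForConfig(cfg)
}
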
	I0701 12:26:45.194608  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:45.194639  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:45.194651  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:45.194656  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:45.198135  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:45.199157  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:45.199178  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:45.199187  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:45.199193  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:45.201747  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:45.202475  653531 pod_ready.go:102] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
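
pod_ready.go:102 above is the "still waiting" branch of the readiness loop: the pod is re-fetched on a roughly 500ms cadence (see the 12:26:44.69, 45.19, 45.69 timestamps) until its PodReady condition turns True. A minimal client-go sketch of such a loop, with waitPodReady as an assumed name rather than minikube's real helper:

package clientutil

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady re-fetches the pod every 500ms until its PodReady condition
// reports True, mirroring the GET cadence in the log. Illustrative sketch,
// not minikube's implementation.
func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err // surface API errors instead of retrying blindly
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // Ready condition not published yet
		})
}
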
	I0701 12:26:45.695358  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:45.695387  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:45.695398  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:45.695405  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:45.698583  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:45.699570  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:45.699591  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:45.699603  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:45.699611  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:45.702299  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:46.195334  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:46.195357  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:46.195366  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:46.195369  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:46.199158  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:46.200116  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:46.200134  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:46.200146  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:46.200153  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:46.203740  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:46.695210  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:46.695238  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:46.695250  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:46.695257  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:46.698972  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:46.699688  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:46.699709  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:46.699722  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:46.699728  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:46.703576  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:47.194463  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:47.194494  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:47.194504  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:47.194512  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:47.197423  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:47.198125  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:47.198144  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:47.198156  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:47.198166  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:47.201172  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:47.695417  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:47.695446  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:47.695457  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:47.695463  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:47.698528  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:47.699400  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:47.699424  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:47.699435  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:47.699440  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:47.702619  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:47.703202  653531 pod_ready.go:102] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:48.194609  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:48.194632  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:48.194640  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:48.194656  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:48.197877  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:48.198784  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:48.198804  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:48.198815  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:48.198819  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:48.201611  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:48.694433  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:48.694459  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:48.694471  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:48.694478  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:48.697539  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:48.698170  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:48.698185  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:48.698193  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:48.698196  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:48.700886  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:49.194905  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:49.194931  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:49.194942  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:49.194954  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:49.199572  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:26:49.200541  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:49.200560  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:49.200570  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:49.200575  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:49.204090  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:49.694531  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:49.694551  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:49.694559  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:49.694563  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:49.698105  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:49.699044  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:49.699062  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:49.699073  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:49.699078  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:49.701617  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:50.195294  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:50.195322  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:50.195333  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:50.195338  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:50.198820  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:50.199561  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:50.199579  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:50.199588  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:50.199594  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:50.202455  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:50.203029  653531 pod_ready.go:102] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:50.694678  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:50.694700  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:50.694708  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:50.694712  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:50.697694  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:50.698383  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:50.698401  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:50.698409  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:50.698413  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:50.701398  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:51.195484  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:51.195522  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:51.195535  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:51.195539  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:51.199113  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:51.199788  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:51.199804  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:51.199811  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:51.199815  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:51.202679  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:51.695276  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:51.695304  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:51.695318  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:51.695325  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:51.698725  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:51.699425  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:51.699444  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:51.699454  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:51.699461  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:51.702960  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:52.195136  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:52.195168  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:52.195178  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:52.195182  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:52.198421  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:52.199068  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:52.199081  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:52.199089  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:52.199133  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:52.201737  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:52.695128  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:52.695153  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:52.695161  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:52.695165  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:52.698791  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:52.699625  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:52.699640  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:52.699647  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:52.699666  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:52.702284  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:52.702827  653531 pod_ready.go:102] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:53.194518  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:53.194542  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:53.194550  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:53.194555  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:53.197969  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:53.198583  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:53.198602  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:53.198610  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:53.198615  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:53.201376  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:53.695296  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:53.695318  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:53.695326  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:53.695331  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:53.699078  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:53.699884  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:53.699910  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:53.699922  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:53.699929  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:53.703186  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:54.195014  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:54.195043  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:54.195054  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:54.195058  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:54.199057  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:54.199733  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:54.199750  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:54.199758  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:54.199763  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:54.202961  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:54.695177  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:54.695212  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:54.695225  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:54.695233  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:54.698371  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:54.699201  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:54.699216  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:54.699224  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:54.699227  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:54.702002  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:55.194543  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:55.194566  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:55.194574  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:55.194579  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:55.198201  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:55.198814  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:55.198832  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:55.198839  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:55.198843  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:55.201469  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:55.201993  653531 pod_ready.go:102] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:55.694950  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:55.694972  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:55.694983  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:55.694990  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:55.698498  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:55.699087  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:55.699101  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:55.699108  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:55.699112  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:55.701817  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.194521  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:26:56.194544  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.194552  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.194557  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.197837  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:56.198482  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:56.198499  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.198505  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.198509  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.201147  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.201653  653531 pod_ready.go:92] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:56.201674  653531 pod_ready.go:81] duration metric: took 13.007452083s for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:56.201692  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
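
With etcd on m03 finally Ready, pod_ready.go:81 records a 13.007452083s duration metric for that wait and the loop moves on to the apiservers. The overall shape is a sequential walk over the control-plane pods, timing each one; a hedged sketch reusing the waitPodReady helper above (pod names and the 6m0s timeout are taken from this log, everything else is illustrative):

package clientutil

import (
	"context"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/klog/v2"
)

// waitControlPlane waits for each control-plane pod in turn and logs a
// per-pod duration metric, like pod_ready.go:81 above. Illustrative sketch.
func waitControlPlane(ctx context.Context, client kubernetes.Interface) error {
	pods := []string{
		"etcd-ha-735960-m03",
		"kube-apiserver-ha-735960",
		"kube-apiserver-ha-735960-m02",
		"kube-apiserver-ha-735960-m03",
	}
	for _, name := range pods {
		start := time.Now()
		if err := waitPodReady(ctx, client, "kube-system", name, 6*time.Minute); err != nil {
			return err
		}
		klog.Infof("duration metric: took %s for pod %q to be \"Ready\"", time.Since(start), name)
	}
	return nil
}
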
	I0701 12:26:56.201750  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960
	I0701 12:26:56.201757  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.201764  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.201770  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.204418  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.205132  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:26:56.205148  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.205154  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.205158  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.207485  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.207887  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:56.207907  653531 pod_ready.go:81] duration metric: took 6.206212ms for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:56.207916  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:56.207971  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m02
	I0701 12:26:56.207981  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.207988  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.207992  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.210274  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.210769  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:26:56.210784  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.210791  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.210795  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.213307  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.213730  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:26:56.213745  653531 pod_ready.go:81] duration metric: took 5.823695ms for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:56.213752  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:26:56.213799  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:56.213806  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.213813  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.213817  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.221893  653531 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0701 12:26:56.222630  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:56.222650  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.222661  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.222665  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.225298  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:56.714434  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:56.714457  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.714466  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.714473  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.717715  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:56.718387  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:56.718404  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:56.718414  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:56.718420  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:56.721172  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:57.213955  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:57.213979  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:57.213987  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:57.213992  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:57.217394  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:57.218050  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:57.218071  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:57.218082  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:57.218088  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:57.221478  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:57.714757  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:57.714779  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:57.714787  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:57.714792  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:57.717911  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:57.718695  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:57.718720  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:57.718734  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:57.718740  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:57.721551  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:58.214582  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:58.214605  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:58.214613  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:58.214616  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:58.218396  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:58.219147  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:58.219167  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:58.219174  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:58.219178  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:58.221830  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:58.222386  653531 pod_ready.go:102] pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:26:58.714864  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:58.714890  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:58.714901  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:58.714906  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:58.718181  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:58.718855  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:58.718874  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:58.718881  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:58.718885  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:58.722484  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:59.214439  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:59.214472  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:59.214484  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:59.214491  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:59.217758  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:59.218712  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:59.218732  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:59.218738  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:59.218742  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:59.221527  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:26:59.713995  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:26:59.714020  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:59.714028  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:59.714033  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:59.717121  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:26:59.717838  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:26:59.717855  653531 round_trippers.go:469] Request Headers:
	I0701 12:26:59.717862  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:26:59.717866  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:26:59.720568  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:00.214542  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:00.214568  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:00.214578  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:00.214583  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:00.218220  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:00.218919  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:00.218938  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:00.218947  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:00.218954  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:00.222119  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:00.223039  653531 pod_ready.go:102] pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace has status "Ready":"False"
	I0701 12:27:00.714993  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:00.715015  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:00.715023  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:00.715027  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:00.718022  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:00.718871  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:00.718894  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:00.718905  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:00.718910  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:00.721660  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:01.214293  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:01.214320  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:01.214345  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:01.214354  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:01.217660  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:01.218619  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:01.218636  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:01.218645  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:01.218649  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:01.221248  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:01.714569  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:01.714593  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:01.714602  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:01.714607  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:01.717986  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:01.718877  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:01.718900  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:01.718912  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:01.718917  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:01.722103  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.213928  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:02.213953  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.213961  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.213965  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.217318  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.218078  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:02.218093  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.218099  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.218102  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.221493  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.714825  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:02.714849  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.714857  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.714862  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.718359  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.719162  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:02.719180  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.719188  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.719193  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.722363  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.723005  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:02.723029  653531 pod_ready.go:81] duration metric: took 6.509269845s for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.723044  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.723152  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960
	I0701 12:27:02.723163  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.723174  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.723186  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.726502  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.727250  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:02.727266  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.727277  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.727280  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.730522  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.731090  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:02.731116  653531 pod_ready.go:81] duration metric: took 8.062099ms for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.731129  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.731206  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m02
	I0701 12:27:02.731216  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.731226  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.731232  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.734354  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.735350  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:02.735370  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.735378  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.735381  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.738250  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:02.739014  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:02.739035  653531 pod_ready.go:81] duration metric: took 7.898052ms for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.739045  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.739108  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m03
	I0701 12:27:02.739116  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.739125  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.739134  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.742376  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.743084  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:02.743106  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.743117  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.743121  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.746455  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:02.747046  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:02.747075  653531 pod_ready.go:81] duration metric: took 8.017741ms for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.747091  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.747213  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25ssf
	I0701 12:27:02.747226  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.747237  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.747242  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.750009  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:02.750887  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:27:02.750910  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.750941  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.750947  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.753841  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:02.754410  653531 pod_ready.go:97] node "ha-735960-m04" hosting pod "kube-proxy-25ssf" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m04" has status "Ready":"Unknown"
	I0701 12:27:02.754439  653531 pod_ready.go:81] duration metric: took 7.336267ms for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	E0701 12:27:02.754453  653531 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-735960-m04" hosting pod "kube-proxy-25ssf" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-735960-m04" has status "Ready":"Unknown"
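
This also explains why every poll issues two GETs, one for the pod and one for a node: after fetching the pod, the loop fetches the node named in the pod's spec and, per pod_ready.go:97, gives up early when that node is not Ready, as with ha-735960-m04 here (its status is still "Unknown" after the restart). A hedged sketch of that check, again with assumed names:

package clientutil

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkHostingNode fails fast when the node running the pod is itself not
// Ready, mirroring the early exit at pod_ready.go:97. Illustrative sketch.
func checkHostingNode(ctx context.Context, c kubernetes.Interface, pod *corev1.Pod) error {
	node, err := c.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady && cond.Status != corev1.ConditionTrue {
			return fmt.Errorf("node %q hosting pod %q is not Ready (status %q)",
				node.Name, pod.Name, cond.Status)
		}
	}
	return nil
}
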
	I0701 12:27:02.754464  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:02.915931  653531 request.go:629] Waited for 161.334912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:27:02.916009  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:27:02.916016  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:02.916026  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:02.916032  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:02.922578  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:27:03.115563  653531 request.go:629] Waited for 192.243271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:03.115665  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:03.115679  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:03.115693  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:03.115702  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:03.119673  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:03.120379  653531 pod_ready.go:92] pod "kube-proxy-776rt" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:03.120399  653531 pod_ready.go:81] duration metric: took 365.926734ms for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:03.120409  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:03.315515  653531 request.go:629] Waited for 195.003147ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:27:03.315575  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:27:03.315580  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:03.315588  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:03.315593  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:03.319367  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:03.515329  653531 request.go:629] Waited for 195.408895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:03.515421  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:03.515429  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:03.515440  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:03.515452  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:03.518825  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:03.519611  653531 pod_ready.go:92] pod "kube-proxy-b6knb" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:03.519633  653531 pod_ready.go:81] duration metric: took 399.213433ms for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:03.519642  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:03.715721  653531 request.go:629] Waited for 195.977677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:27:03.715811  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:27:03.715820  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:03.715828  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:03.715833  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:03.720058  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:03.915338  653531 request.go:629] Waited for 194.486914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:03.915438  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:03.915447  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:03.915455  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:03.915462  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:03.919143  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:03.919765  653531 pod_ready.go:92] pod "kube-proxy-lphzn" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:03.919789  653531 pod_ready.go:81] duration metric: took 400.14123ms for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:03.919800  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:04.114907  653531 request.go:629] Waited for 195.032639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:27:04.114983  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:27:04.115004  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:04.115019  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:04.115027  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:04.119283  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:04.315128  653531 request.go:629] Waited for 195.065236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:04.315231  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:04.315243  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:04.315255  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:04.315264  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:04.319107  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:04.319792  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:04.319821  653531 pod_ready.go:81] duration metric: took 400.011957ms for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:04.319838  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:04.515786  653531 request.go:629] Waited for 195.848501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:27:04.515865  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:27:04.515872  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:04.515885  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:04.515894  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:04.519607  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:04.715555  653531 request.go:629] Waited for 195.254305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:04.715662  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:04.715673  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:04.715686  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:04.715696  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:04.718989  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:04.719533  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:04.719555  653531 pod_ready.go:81] duration metric: took 399.709368ms for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:04.719565  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:04.915742  653531 request.go:629] Waited for 196.076319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:27:04.915873  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:27:04.915884  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:04.915892  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:04.915896  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:04.919910  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:05.114903  653531 request.go:629] Waited for 194.321141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:05.114998  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:05.115010  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.115020  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.115029  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.118835  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:05.119325  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:05.119348  653531 pod_ready.go:81] duration metric: took 399.776156ms for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:05.119360  653531 pod_ready.go:38] duration metric: took 21.966297492s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
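[Editor's note] The pod_ready.go loop recorded above alternates a GET on each pod with a GET on its node, paced to roughly one request per 200ms by client-side throttling. A minimal, self-contained sketch of that readiness check — not minikube's actual implementation; the kubeconfig path is an assumption, while the namespace, pod name, and 6m0s budget are taken from this log:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location, for illustration only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        deadline := time.Now().Add(6 * time.Minute) // matches the "waiting up to 6m0s" lines
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(
                context.TODO(), "kube-proxy-b6knb", metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    // A pod is "Ready" when its Ready condition reports True.
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("pod is Ready")
                        return
                    }
                }
            }
            time.Sleep(200 * time.Millisecond) // roughly the cadence the throttler enforces above
        }
        fmt.Println("timed out waiting for pod readiness")
    }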
	I0701 12:27:05.119380  653531 api_server.go:52] waiting for apiserver process to appear ...
	I0701 12:27:05.119446  653531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 12:27:05.134970  653531 api_server.go:72] duration metric: took 22.668924734s to wait for apiserver process to appear ...
	I0701 12:27:05.135005  653531 api_server.go:88] waiting for apiserver healthz status ...
	I0701 12:27:05.135037  653531 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0701 12:27:05.139924  653531 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0701 12:27:05.140029  653531 round_trippers.go:463] GET https://192.168.39.16:8443/version
	I0701 12:27:05.140040  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.140052  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.140060  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.141045  653531 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0701 12:27:05.141124  653531 api_server.go:141] control plane version: v1.30.2
	I0701 12:27:05.141142  653531 api_server.go:131] duration metric: took 6.129152ms to wait for apiserver health ...
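[Editor's note] The healthz step is a plain HTTPS GET against the apiserver that succeeds when the body reads "ok". A sketch against the endpoint from this log; InsecureSkipVerify is used here only to keep the example self-contained — the real client trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.16:8443/healthz")
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }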
	I0701 12:27:05.141156  653531 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 12:27:05.315496  653531 request.go:629] Waited for 174.257848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:27:05.315603  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:27:05.315615  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.315627  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.315640  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.331176  653531 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0701 12:27:05.341126  653531 system_pods.go:59] 26 kube-system pods found
	I0701 12:27:05.341168  653531 system_pods.go:61] "coredns-7db6d8ff4d-nk4lf" [c03dd635-a82d-4f18-bd72-ec575f91867e] Running
	I0701 12:27:05.341173  653531 system_pods.go:61] "coredns-7db6d8ff4d-p4rtz" [267efba7-bf34-48d5-ab15-5bda45ff2f4f] Running
	I0701 12:27:05.341177  653531 system_pods.go:61] "etcd-ha-735960" [4b98745c-292f-42b5-977c-69c50fd241f1] Running
	I0701 12:27:05.341181  653531 system_pods.go:61] "etcd-ha-735960-m02" [fed8cdfa-8428-47e0-84ef-05297ad232f8] Running
	I0701 12:27:05.341184  653531 system_pods.go:61] "etcd-ha-735960-m03" [50b07bc3-ff6b-487d-8654-901d96892868] Running
	I0701 12:27:05.341187  653531 system_pods.go:61] "kindnet-2424m" [aa18d5dd-f6eb-4f04-a61e-b0b257e214af] Running
	I0701 12:27:05.341190  653531 system_pods.go:61] "kindnet-6gx8s" [7f46a773-a075-476c-9e54-89f125b4b57a] Running
	I0701 12:27:05.341195  653531 system_pods.go:61] "kindnet-7f6hm" [a8c302b4-1163-4d4f-bfe3-4fd3b5d23cf0] Running
	I0701 12:27:05.341199  653531 system_pods.go:61] "kindnet-bztzv" [7afa0e45-3d10-40bc-b422-7005a3ca9d3a] Running
	I0701 12:27:05.341203  653531 system_pods.go:61] "kube-apiserver-ha-735960" [ad041aaa-465a-4d8a-a8dc-b7665e1d587d] Running
	I0701 12:27:05.341208  653531 system_pods.go:61] "kube-apiserver-ha-735960-m02" [ba28f48e-1c18-47e3-ab11-a9b5588c5c32] Running
	I0701 12:27:05.341213  653531 system_pods.go:61] "kube-apiserver-ha-735960-m03" [baafa3bf-78ee-4269-9591-b0440927e055] Running
	I0701 12:27:05.341218  653531 system_pods.go:61] "kube-controller-manager-ha-735960" [3f0f0cf5-329d-47bc-b922-7583902e2607] Running
	I0701 12:27:05.341222  653531 system_pods.go:61] "kube-controller-manager-ha-735960-m02" [258fde18-ac5c-4446-842b-9465529b154c] Running
	I0701 12:27:05.341231  653531 system_pods.go:61] "kube-controller-manager-ha-735960-m03" [79acc56b-a9e0-4d4b-bc64-1a3a36ddf051] Running
	I0701 12:27:05.341235  653531 system_pods.go:61] "kube-proxy-25ssf" [11f0dc23-ab9d-4d39-988d-4c44dfde86cd] Running
	I0701 12:27:05.341244  653531 system_pods.go:61] "kube-proxy-776rt" [5666dac7-924e-4429-bd1d-a1a5647cc611] Running
	I0701 12:27:05.341248  653531 system_pods.go:61] "kube-proxy-b6knb" [eb36e930-5799-4ff7-821a-ccb22303cd1b] Running
	I0701 12:27:05.341253  653531 system_pods.go:61] "kube-proxy-lphzn" [0761a7a6-740e-4cde-9ab5-e02e8d417907] Running
	I0701 12:27:05.341258  653531 system_pods.go:61] "kube-scheduler-ha-735960" [c624cf42-a7d6-4aaf-859d-1aeaf29f9acb] Running
	I0701 12:27:05.341266  653531 system_pods.go:61] "kube-scheduler-ha-735960-m02" [7de78af7-2d79-46dc-bd34-f221d79fde06] Running
	I0701 12:27:05.341276  653531 system_pods.go:61] "kube-scheduler-ha-735960-m03" [9f9a2030-9332-44af-b8dc-3b4609e53f91] Running
	I0701 12:27:05.341284  653531 system_pods.go:61] "kube-vip-ha-735960" [4299679a-c145-4f4f-8ec6-3cd468b98ef1] Running
	I0701 12:27:05.341289  653531 system_pods.go:61] "kube-vip-ha-735960-m02" [1c9b13e1-515c-43c0-8d99-5ad1c1807727] Running
	I0701 12:27:05.341296  653531 system_pods.go:61] "kube-vip-ha-735960-m03" [7069ea7c-5461-4fe6-a969-97fe33396ebb] Running
	I0701 12:27:05.341300  653531 system_pods.go:61] "storage-provisioner" [f5c4f7f9-d648-4019-a5ea-6ce59f6c5663] Running
	I0701 12:27:05.341308  653531 system_pods.go:74] duration metric: took 200.142768ms to wait for pod list to return data ...
	I0701 12:27:05.341319  653531 default_sa.go:34] waiting for default service account to be created ...
	I0701 12:27:05.515805  653531 request.go:629] Waited for 174.38988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/default/serviceaccounts
	I0701 12:27:05.515869  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/default/serviceaccounts
	I0701 12:27:05.515874  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.515882  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.515886  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.519545  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:05.519680  653531 default_sa.go:45] found service account: "default"
	I0701 12:27:05.519701  653531 default_sa.go:55] duration metric: took 178.373792ms for default service account to be created ...
	I0701 12:27:05.519712  653531 system_pods.go:116] waiting for k8s-apps to be running ...
	I0701 12:27:05.715337  653531 request.go:629] Waited for 195.548539ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:27:05.715405  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:27:05.715411  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.715423  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.715431  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.722571  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:27:05.729587  653531 system_pods.go:86] 26 kube-system pods found
	I0701 12:27:05.729628  653531 system_pods.go:89] "coredns-7db6d8ff4d-nk4lf" [c03dd635-a82d-4f18-bd72-ec575f91867e] Running
	I0701 12:27:05.729636  653531 system_pods.go:89] "coredns-7db6d8ff4d-p4rtz" [267efba7-bf34-48d5-ab15-5bda45ff2f4f] Running
	I0701 12:27:05.729642  653531 system_pods.go:89] "etcd-ha-735960" [4b98745c-292f-42b5-977c-69c50fd241f1] Running
	I0701 12:27:05.729649  653531 system_pods.go:89] "etcd-ha-735960-m02" [fed8cdfa-8428-47e0-84ef-05297ad232f8] Running
	I0701 12:27:05.729655  653531 system_pods.go:89] "etcd-ha-735960-m03" [50b07bc3-ff6b-487d-8654-901d96892868] Running
	I0701 12:27:05.729661  653531 system_pods.go:89] "kindnet-2424m" [aa18d5dd-f6eb-4f04-a61e-b0b257e214af] Running
	I0701 12:27:05.729666  653531 system_pods.go:89] "kindnet-6gx8s" [7f46a773-a075-476c-9e54-89f125b4b57a] Running
	I0701 12:27:05.729671  653531 system_pods.go:89] "kindnet-7f6hm" [a8c302b4-1163-4d4f-bfe3-4fd3b5d23cf0] Running
	I0701 12:27:05.729677  653531 system_pods.go:89] "kindnet-bztzv" [7afa0e45-3d10-40bc-b422-7005a3ca9d3a] Running
	I0701 12:27:05.729684  653531 system_pods.go:89] "kube-apiserver-ha-735960" [ad041aaa-465a-4d8a-a8dc-b7665e1d587d] Running
	I0701 12:27:05.729689  653531 system_pods.go:89] "kube-apiserver-ha-735960-m02" [ba28f48e-1c18-47e3-ab11-a9b5588c5c32] Running
	I0701 12:27:05.729695  653531 system_pods.go:89] "kube-apiserver-ha-735960-m03" [baafa3bf-78ee-4269-9591-b0440927e055] Running
	I0701 12:27:05.729702  653531 system_pods.go:89] "kube-controller-manager-ha-735960" [3f0f0cf5-329d-47bc-b922-7583902e2607] Running
	I0701 12:27:05.729710  653531 system_pods.go:89] "kube-controller-manager-ha-735960-m02" [258fde18-ac5c-4446-842b-9465529b154c] Running
	I0701 12:27:05.729720  653531 system_pods.go:89] "kube-controller-manager-ha-735960-m03" [79acc56b-a9e0-4d4b-bc64-1a3a36ddf051] Running
	I0701 12:27:05.729729  653531 system_pods.go:89] "kube-proxy-25ssf" [11f0dc23-ab9d-4d39-988d-4c44dfde86cd] Running
	I0701 12:27:05.729737  653531 system_pods.go:89] "kube-proxy-776rt" [5666dac7-924e-4429-bd1d-a1a5647cc611] Running
	I0701 12:27:05.729745  653531 system_pods.go:89] "kube-proxy-b6knb" [eb36e930-5799-4ff7-821a-ccb22303cd1b] Running
	I0701 12:27:05.729755  653531 system_pods.go:89] "kube-proxy-lphzn" [0761a7a6-740e-4cde-9ab5-e02e8d417907] Running
	I0701 12:27:05.729764  653531 system_pods.go:89] "kube-scheduler-ha-735960" [c624cf42-a7d6-4aaf-859d-1aeaf29f9acb] Running
	I0701 12:27:05.729770  653531 system_pods.go:89] "kube-scheduler-ha-735960-m02" [7de78af7-2d79-46dc-bd34-f221d79fde06] Running
	I0701 12:27:05.729776  653531 system_pods.go:89] "kube-scheduler-ha-735960-m03" [9f9a2030-9332-44af-b8dc-3b4609e53f91] Running
	I0701 12:27:05.729783  653531 system_pods.go:89] "kube-vip-ha-735960" [4299679a-c145-4f4f-8ec6-3cd468b98ef1] Running
	I0701 12:27:05.729789  653531 system_pods.go:89] "kube-vip-ha-735960-m02" [1c9b13e1-515c-43c0-8d99-5ad1c1807727] Running
	I0701 12:27:05.729796  653531 system_pods.go:89] "kube-vip-ha-735960-m03" [7069ea7c-5461-4fe6-a969-97fe33396ebb] Running
	I0701 12:27:05.729802  653531 system_pods.go:89] "storage-provisioner" [f5c4f7f9-d648-4019-a5ea-6ce59f6c5663] Running
	I0701 12:27:05.729815  653531 system_pods.go:126] duration metric: took 210.095212ms to wait for k8s-apps to be running ...
	I0701 12:27:05.729829  653531 system_svc.go:44] waiting for kubelet service to be running ....
	I0701 12:27:05.729888  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:27:05.745646  653531 system_svc.go:56] duration metric: took 15.808828ms WaitForService to wait for kubelet
	I0701 12:27:05.745679  653531 kubeadm.go:576] duration metric: took 23.279640822s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:27:05.745702  653531 node_conditions.go:102] verifying NodePressure condition ...
	I0701 12:27:05.915161  653531 request.go:629] Waited for 169.354932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes
	I0701 12:27:05.915221  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes
	I0701 12:27:05.915226  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:05.915234  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:05.915239  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:05.919105  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:05.920307  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:05.920336  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:05.920352  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:05.920357  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:05.920361  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:05.920366  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:05.920370  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:05.920375  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:05.920382  653531 node_conditions.go:105] duration metric: took 174.672945ms to run NodePressure ...
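[Editor's note] The NodePressure pass lists all four nodes and reads the same two capacity figures printed above (ephemeral storage and CPU count). A sketch using client-go's resource accessors — again an illustration with an assumed kubeconfig path:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            c := n.Status.Capacity
            // e.g. "ha-735960: ephemeral-storage=17734596Ki cpu=2"
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
                n.Name, c.StorageEphemeral().String(), c.Cpu().String())
        }
    }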
	I0701 12:27:05.920400  653531 start.go:240] waiting for startup goroutines ...
	I0701 12:27:05.920438  653531 start.go:254] writing updated cluster config ...
	I0701 12:27:05.922556  653531 out.go:177] 
	I0701 12:27:05.924320  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:27:05.924444  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:27:05.926228  653531 out.go:177] * Starting "ha-735960-m04" worker node in "ha-735960" cluster
	I0701 12:27:05.927583  653531 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0701 12:27:05.927623  653531 cache.go:56] Caching tarball of preloaded images
	I0701 12:27:05.927740  653531 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0701 12:27:05.927753  653531 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0701 12:27:05.927868  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:27:05.928081  653531 start.go:360] acquireMachinesLock for ha-735960-m04: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:27:05.928138  653531 start.go:364] duration metric: took 34.293µs to acquireMachinesLock for "ha-735960-m04"
	I0701 12:27:05.928160  653531 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:27:05.928170  653531 fix.go:54] fixHost starting: m04
	I0701 12:27:05.928452  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:27:05.928496  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:27:05.944734  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39337
	I0701 12:27:05.945306  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:27:05.945856  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:27:05.945878  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:27:05.946270  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:27:05.946505  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:05.946718  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetState
	I0701 12:27:05.948900  653531 fix.go:112] recreateIfNeeded on ha-735960-m04: state=Stopped err=<nil>
	I0701 12:27:05.948936  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	W0701 12:27:05.949137  653531 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 12:27:05.951007  653531 out.go:177] * Restarting existing kvm2 VM for "ha-735960-m04" ...
	I0701 12:27:05.952219  653531 main.go:141] libmachine: (ha-735960-m04) Calling .Start
	I0701 12:27:05.952428  653531 main.go:141] libmachine: (ha-735960-m04) Ensuring networks are active...
	I0701 12:27:05.953378  653531 main.go:141] libmachine: (ha-735960-m04) Ensuring network default is active
	I0701 12:27:05.953815  653531 main.go:141] libmachine: (ha-735960-m04) Ensuring network mk-ha-735960 is active
	I0701 12:27:05.954229  653531 main.go:141] libmachine: (ha-735960-m04) Getting domain xml...
	I0701 12:27:05.954857  653531 main.go:141] libmachine: (ha-735960-m04) Creating domain...
	I0701 12:27:07.274791  653531 main.go:141] libmachine: (ha-735960-m04) Waiting to get IP...
	I0701 12:27:07.275684  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:07.276224  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:07.276269  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:07.276176  654403 retry.go:31] will retry after 236.931472ms: waiting for machine to come up
	I0701 12:27:07.514910  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:07.515487  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:07.515520  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:07.515422  654403 retry.go:31] will retry after 376.766943ms: waiting for machine to come up
	I0701 12:27:07.894235  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:07.894716  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:07.894748  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:07.894658  654403 retry.go:31] will retry after 389.939732ms: waiting for machine to come up
	I0701 12:27:08.286528  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:08.287041  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:08.287066  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:08.286982  654403 retry.go:31] will retry after 542.184171ms: waiting for machine to come up
	I0701 12:27:08.831459  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:08.832024  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:08.832105  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:08.832069  654403 retry.go:31] will retry after 609.488369ms: waiting for machine to come up
	I0701 12:27:09.442798  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:09.443236  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:09.443272  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:09.443174  654403 retry.go:31] will retry after 777.604605ms: waiting for machine to come up
	I0701 12:27:10.221860  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:10.222317  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:10.222352  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:10.222242  654403 retry.go:31] will retry after 1.013463977s: waiting for machine to come up
	I0701 12:27:11.237171  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:11.237628  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:11.237658  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:11.237572  654403 retry.go:31] will retry after 1.368493369s: waiting for machine to come up
	I0701 12:27:12.607736  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:12.608308  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:12.608342  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:12.608254  654403 retry.go:31] will retry after 1.709127759s: waiting for machine to come up
	I0701 12:27:14.320033  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:14.320531  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:14.320565  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:14.320491  654403 retry.go:31] will retry after 2.145058749s: waiting for machine to come up
	I0701 12:27:16.466840  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:16.467246  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:16.467275  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:16.467196  654403 retry.go:31] will retry after 2.340416682s: waiting for machine to come up
	I0701 12:27:18.809756  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:18.810215  653531 main.go:141] libmachine: (ha-735960-m04) DBG | unable to find current IP address of domain ha-735960-m04 in network mk-ha-735960
	I0701 12:27:18.810245  653531 main.go:141] libmachine: (ha-735960-m04) DBG | I0701 12:27:18.810155  654403 retry.go:31] will retry after 2.893605535s: waiting for machine to come up
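[Editor's note] The retry.go lines above show the wait-for-IP loop backing off from ~237ms up to ~2.9s between DHCP lookups. A generic jittered-backoff sketch in the same spirit; the growth factor and jitter here are assumptions, not minikube's exact parameters:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
        delay := base
        for i := 0; i < attempts; i++ {
            if err := op(); err == nil {
                return nil
            }
            // Add up to 50% random jitter so concurrent waiters do not sync up.
            jittered := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
            time.Sleep(jittered)
            delay = delay * 3 / 2 // ~1.5x growth, roughly matching the observed spacing
        }
        return errors.New("machine did not come up")
    }

    func main() {
        _ = retryWithBackoff(13, 200*time.Millisecond, func() error {
            return errors.New("unable to find current IP address") // stand-in for the DHCP lease lookup
        })
    }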
	I0701 12:27:21.705535  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.706011  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has current primary IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.706036  653531 main.go:141] libmachine: (ha-735960-m04) Found IP for machine: 192.168.39.60
	I0701 12:27:21.706050  653531 main.go:141] libmachine: (ha-735960-m04) Reserving static IP address...
	I0701 12:27:21.706638  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "ha-735960-m04", mac: "52:54:00:2d:8e:6d", ip: "192.168.39.60"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:21.706671  653531 main.go:141] libmachine: (ha-735960-m04) Reserved static IP address: 192.168.39.60
	I0701 12:27:21.706689  653531 main.go:141] libmachine: (ha-735960-m04) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960-m04", mac: "52:54:00:2d:8e:6d", ip: "192.168.39.60"}
	I0701 12:27:21.706703  653531 main.go:141] libmachine: (ha-735960-m04) DBG | Getting to WaitForSSH function...
	I0701 12:27:21.706715  653531 main.go:141] libmachine: (ha-735960-m04) Waiting for SSH to be available...
	I0701 12:27:21.709236  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.709702  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:21.709729  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.709818  653531 main.go:141] libmachine: (ha-735960-m04) DBG | Using SSH client type: external
	I0701 12:27:21.709841  653531 main.go:141] libmachine: (ha-735960-m04) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa (-rw-------)
	I0701 12:27:21.709870  653531 main.go:141] libmachine: (ha-735960-m04) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0701 12:27:21.709885  653531 main.go:141] libmachine: (ha-735960-m04) DBG | About to run SSH command:
	I0701 12:27:21.709897  653531 main.go:141] libmachine: (ha-735960-m04) DBG | exit 0
	I0701 12:27:21.838462  653531 main.go:141] libmachine: (ha-735960-m04) DBG | SSH cmd err, output: <nil>: 
	I0701 12:27:21.838803  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetConfigRaw
	I0701 12:27:21.839497  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetIP
	I0701 12:27:21.842255  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.842727  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:21.842764  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.843067  653531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
	I0701 12:27:21.843309  653531 machine.go:94] provisionDockerMachine start ...
	I0701 12:27:21.843332  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:21.843625  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:21.846158  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.846625  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:21.846658  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.846874  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:21.847122  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:21.847313  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:21.847496  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:21.847763  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:21.847995  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:21.848012  653531 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 12:27:21.958527  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0701 12:27:21.958560  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetMachineName
	I0701 12:27:21.958896  653531 buildroot.go:166] provisioning hostname "ha-735960-m04"
	I0701 12:27:21.958928  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetMachineName
	I0701 12:27:21.959168  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:21.961718  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.962176  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:21.962212  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:21.962410  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:21.962629  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:21.962804  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:21.962930  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:21.963089  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:21.963293  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:21.963311  653531 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-735960-m04 && echo "ha-735960-m04" | sudo tee /etc/hostname
	I0701 12:27:22.089150  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960-m04
	
	I0701 12:27:22.089185  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.092352  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.092805  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.092829  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.093059  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:22.093293  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.093532  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.093680  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:22.093947  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:22.094124  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:22.094152  653531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-735960-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-735960-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
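[Editor's note] The shell snippet above makes the new hostname resolvable locally: if /etc/hosts already has a 127.0.1.1 entry it is rewritten in place with sed, otherwise one is appended. The write goes through "sudo tee" rather than a plain ">>" redirect because the redirection would otherwise be performed by the unprivileged SSH user, not by root.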
	I0701 12:27:22.211873  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 12:27:22.211908  653531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
	I0701 12:27:22.211930  653531 buildroot.go:174] setting up certificates
	I0701 12:27:22.211938  653531 provision.go:84] configureAuth start
	I0701 12:27:22.211947  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetMachineName
	I0701 12:27:22.212269  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetIP
	I0701 12:27:22.215120  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.215523  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.215555  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.215810  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.218161  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.218800  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.218836  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.219044  653531 provision.go:143] copyHostCerts
	I0701 12:27:22.219086  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:27:22.219130  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
	I0701 12:27:22.219141  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
	I0701 12:27:22.219226  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
	I0701 12:27:22.219330  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:27:22.219356  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
	I0701 12:27:22.219365  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
	I0701 12:27:22.219402  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
	I0701 12:27:22.219472  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:27:22.219497  653531 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
	I0701 12:27:22.219503  653531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
	I0701 12:27:22.219534  653531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
	I0701 12:27:22.219602  653531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960-m04 san=[127.0.0.1 192.168.39.60 ha-735960-m04 localhost minikube]
	I0701 12:27:22.329827  653531 provision.go:177] copyRemoteCerts
	I0701 12:27:22.329892  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:27:22.329923  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.332967  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.333373  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.333406  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.333651  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:22.333896  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.334062  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:22.334281  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa Username:docker}
	I0701 12:27:22.417286  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:27:22.417383  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:27:22.441229  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:27:22.441316  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0701 12:27:22.465192  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:27:22.465262  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 12:27:22.489482  653531 provision.go:87] duration metric: took 277.524425ms to configureAuth
	I0701 12:27:22.489525  653531 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:27:22.489832  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:27:22.489882  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:22.490191  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.493387  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.493808  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.493842  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.494001  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:22.494272  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.494482  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.494666  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:22.494871  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:22.495082  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:22.495096  653531 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:27:22.603693  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:27:22.603722  653531 buildroot.go:70] root file system type: tmpfs
	I0701 12:27:22.603868  653531 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:27:22.603921  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.606932  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.607406  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.607441  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.607659  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:22.607881  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.608030  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.608161  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:22.608332  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:22.608539  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:22.608607  653531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.16"
	Environment="NO_PROXY=192.168.39.16,192.168.39.86"
	Environment="NO_PROXY=192.168.39.16,192.168.39.86,192.168.39.97"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:27:22.729176  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.16
	Environment=NO_PROXY=192.168.39.16,192.168.39.86
	Environment=NO_PROXY=192.168.39.16,192.168.39.86,192.168.39.97
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
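[Editor's note] The three Environment=NO_PROXY lines in the unit above are cumulative rewrites: when systemd sees the same variable assigned several times, the last assignment wins, so dockerd ends up with NO_PROXY=192.168.39.16,192.168.39.86,192.168.39.97 — one entry per control-plane node, extending the single-address "NO_PROXY=192.168.39.16" reported under "Found network options" earlier in the run.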
	
	I0701 12:27:22.729234  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:22.732936  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.733425  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:22.733462  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:22.733653  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:22.733908  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.734181  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:22.734376  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:22.734607  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:22.734842  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:22.734871  653531 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:27:24.534039  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
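[Editor's note] The "diff ... || { mv ...; systemctl ...; }" command above is an idempotent install: the unit is only replaced, and Docker only reloaded, enabled, and restarted, when the freshly rendered docker.service.new differs from what is on disk. Here diff exits non-zero with "can't stat" because the restarted VM's tmpfs root has no unit file yet, so the install branch runs — and the "Created symlink" line is "systemctl enable docker" wiring the service into multi-user.target.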
	
	I0701 12:27:24.534075  653531 machine.go:97] duration metric: took 2.690748128s to provisionDockerMachine
	I0701 12:27:24.534091  653531 start.go:293] postStartSetup for "ha-735960-m04" (driver="kvm2")
	I0701 12:27:24.534104  653531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:27:24.534123  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.534499  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:27:24.534541  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:24.537254  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.537740  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.537779  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.537959  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:24.538181  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.538373  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:24.538597  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa Username:docker}
	I0701 12:27:24.622239  653531 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:27:24.626566  653531 info.go:137] Remote host: Buildroot 2023.02.9
	I0701 12:27:24.626597  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
	I0701 12:27:24.626682  653531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
	I0701 12:27:24.626776  653531 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
	I0701 12:27:24.626790  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
	I0701 12:27:24.626899  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:27:24.638615  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:27:24.662568  653531 start.go:296] duration metric: took 128.459164ms for postStartSetup
	I0701 12:27:24.662618  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.663010  653531 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0701 12:27:24.663051  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:24.665748  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.666087  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.666114  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.666265  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:24.666549  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.666727  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:24.666943  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa Username:docker}
	I0701 12:27:24.753987  653531 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0701 12:27:24.754081  653531 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0701 12:27:24.791910  653531 fix.go:56] duration metric: took 18.863722464s for fixHost
	I0701 12:27:24.791970  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:24.795473  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.795824  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.795860  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.796063  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:24.796321  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.796518  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.796690  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:24.796892  653531 main.go:141] libmachine: Using SSH client type: native
	I0701 12:27:24.797130  653531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0701 12:27:24.797146  653531 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0701 12:27:24.911069  653531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836844.884316737
	
	I0701 12:27:24.911100  653531 fix.go:216] guest clock: 1719836844.884316737
	I0701 12:27:24.911110  653531 fix.go:229] Guest: 2024-07-01 12:27:24.884316737 +0000 UTC Remote: 2024-07-01 12:27:24.791945819 +0000 UTC m=+202.261797488 (delta=92.370918ms)
	I0701 12:27:24.911131  653531 fix.go:200] guest clock delta is within tolerance: 92.370918ms
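
	The fix.go lines above implement the guest-clock check: minikube runs date +%s.%N on the guest, parses the seconds.nanoseconds output, and compares it against the host's wall clock; a delta inside the tolerance (92ms here) means no resync is needed. A self-contained sketch of that parse-and-compare step, assuming the nine-digit %N fractional format (function names are illustrative):
	
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	// parseGuestClock turns "seconds.nanoseconds" (date +%s.%N, %N is always
	// nine digits) into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}
	
	func main() {
		guest, _ := parseGuestClock("1719836844.884316737") // value from the log above
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // illustrative threshold
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
	}
	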
	I0701 12:27:24.911137  653531 start.go:83] releasing machines lock for "ha-735960-m04", held for 18.982986548s
	I0701 12:27:24.911163  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.911481  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetIP
	I0701 12:27:24.914298  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.914691  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.914721  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.917119  653531 out.go:177] * Found network options:
	I0701 12:27:24.918569  653531 out.go:177]   - NO_PROXY=192.168.39.16,192.168.39.86,192.168.39.97
	W0701 12:27:24.919961  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:27:24.919987  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:27:24.919997  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:27:24.920012  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.920847  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.921063  653531 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:27:24.921170  653531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:27:24.921210  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	W0701 12:27:24.921252  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:27:24.921277  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 12:27:24.921290  653531 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 12:27:24.921364  653531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0701 12:27:24.921385  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:27:24.924253  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.924561  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.924715  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.924742  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.924933  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:24.925058  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:24.925080  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:24.925110  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.925325  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:24.925339  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:27:24.925519  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:27:24.925615  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa Username:docker}
	I0701 12:27:24.925685  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:27:24.925840  653531 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa Username:docker}
	W0701 12:27:25.004044  653531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:27:25.004109  653531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:27:25.029712  653531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
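
	cni.go here sidelines competing CNI configs by renaming any *bridge* or *podman* file in /etc/cni/net.d to a .mk_disabled suffix, so only the intended network config is loaded. The find/mv pipeline above could be expressed in plain Go roughly as follows (illustrative only; minikube runs the shell version over SSH):
	
	package main
	
	import (
		"os"
		"path/filepath"
		"strings"
	)
	
	// disableConflictingCNI renames bridge/podman configs so the container
	// runtime no longer picks them up, mirroring the log's mv to .mk_disabled.
	func disableConflictingCNI(dir string) error {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return err
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				p := filepath.Join(dir, name)
				if err := os.Rename(p, p+".mk_disabled"); err != nil {
					return err
				}
			}
		}
		return nil
	}
	
	func main() { _ = disableConflictingCNI("/etc/cni/net.d") }
	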
	I0701 12:27:25.029746  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:27:25.029880  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:27:25.052034  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:27:25.062847  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:27:25.073005  653531 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:27:25.073080  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:27:25.083300  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:27:25.093834  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:27:25.104814  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:27:25.115006  653531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:27:25.126080  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:27:25.136492  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0701 12:27:25.147986  653531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0701 12:27:25.158638  653531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:27:25.168301  653531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:27:25.177427  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:25.290645  653531 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 12:27:25.317946  653531 start.go:494] detecting cgroup driver to use...
	I0701 12:27:25.318090  653531 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:27:25.333522  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:27:25.349308  653531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:27:25.366057  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:27:25.379554  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:27:25.393005  653531 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:27:25.427883  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:27:25.443710  653531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:27:25.462653  653531 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:27:25.466440  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:27:25.475817  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:27:25.491900  653531 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:27:25.609810  653531 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:27:25.736607  653531 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:27:25.736666  653531 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0701 12:27:25.753218  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:25.872913  653531 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:27:28.274644  653531 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.401692528s)
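
	docker.go above writes /etc/docker/daemon.json (130 bytes; the log shows its size, not its content) before restarting Docker. One plausible shape for that file, using Docker's documented cgroup-driver option; the exact fields minikube writes are an assumption here:
	
	package main
	
	import (
		"encoding/json"
		"fmt"
	)
	
	func main() {
		// "configuring docker to use cgroupfs" in the log corresponds to this
		// documented daemon.json key; any other fields are not shown in the log.
		daemon := map[string][]string{
			"exec-opts": {"native.cgroupdriver=cgroupfs"},
		}
		b, _ := json.MarshalIndent(daemon, "", "  ")
		fmt.Println(string(b))
	}
	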
	I0701 12:27:28.274730  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0701 12:27:28.288270  653531 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0701 12:27:28.306360  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:27:28.320063  653531 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:27:28.444909  653531 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:27:28.582500  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:28.708064  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:27:28.728173  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0701 12:27:28.743660  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:28.873765  653531 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0701 12:27:28.960958  653531 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:27:28.961063  653531 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0701 12:27:28.967089  653531 start.go:562] Will wait 60s for crictl version
	I0701 12:27:28.967205  653531 ssh_runner.go:195] Run: which crictl
	I0701 12:27:28.971404  653531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:27:29.011615  653531 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.1
	RuntimeApiVersion:  v1
	I0701 12:27:29.011699  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:27:29.041339  653531 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:27:29.073461  653531 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
	I0701 12:27:29.075110  653531 out.go:177]   - env NO_PROXY=192.168.39.16
	I0701 12:27:29.076621  653531 out.go:177]   - env NO_PROXY=192.168.39.16,192.168.39.86
	I0701 12:27:29.078186  653531 out.go:177]   - env NO_PROXY=192.168.39.16,192.168.39.86,192.168.39.97
	I0701 12:27:29.079949  653531 main.go:141] libmachine: (ha-735960-m04) Calling .GetIP
	I0701 12:27:29.083268  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:29.083683  653531 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:27:16 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:27:29.083711  653531 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:27:29.084018  653531 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0701 12:27:29.088562  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
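
	The /etc/hosts update above uses a filter-then-append idiom: grep -v drops any stale host.minikube.internal line, echo appends the fresh mapping, and the result is copied back over /etc/hosts via a temp file, so the entry is always present exactly once. The same upsert in self-contained Go (illustrative; minikube executes the shell form over SSH):
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// upsertHostsEntry drops any existing line for name and appends a fresh
	// "ip<TAB>name" mapping, matching the grep -v / echo pipeline in the log.
	func upsertHostsEntry(hosts, ip, name string) string {
		var out []string
		for _, line := range strings.Split(hosts, "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // stale entry for this name
			}
			out = append(out, line)
		}
		out = append(out, ip+"\t"+name)
		return strings.Join(out, "\n")
	}
	
	func main() {
		fmt.Println(upsertHostsEntry("127.0.0.1\tlocalhost", "192.168.39.1", "host.minikube.internal"))
	}
	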
	I0701 12:27:29.105010  653531 mustload.go:65] Loading cluster: ha-735960
	I0701 12:27:29.105303  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:27:29.105654  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:27:29.105708  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:27:29.121628  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I0701 12:27:29.122222  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:27:29.122816  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:27:29.122844  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:27:29.123210  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:27:29.123475  653531 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:27:29.125364  653531 host.go:66] Checking if "ha-735960" exists ...
	I0701 12:27:29.125670  653531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:27:29.125708  653531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:27:29.141532  653531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36053
	I0701 12:27:29.142051  653531 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:27:29.142638  653531 main.go:141] libmachine: Using API Version  1
	I0701 12:27:29.142662  653531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:27:29.143010  653531 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:27:29.143254  653531 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:27:29.143488  653531 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960 for IP: 192.168.39.60
	I0701 12:27:29.143501  653531 certs.go:194] generating shared ca certs ...
	I0701 12:27:29.143518  653531 certs.go:226] acquiring lock for ca certs: {Name:mk34e166bfd069e523b2325e14d1812c523bff53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:27:29.143646  653531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key
	I0701 12:27:29.143686  653531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key
	I0701 12:27:29.143702  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 12:27:29.143722  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 12:27:29.143739  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 12:27:29.143757  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 12:27:29.143817  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem (1338 bytes)
	W0701 12:27:29.143851  653531 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854_empty.pem, impossibly tiny 0 bytes
	I0701 12:27:29.143871  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 12:27:29.143894  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:27:29.143916  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:27:29.143937  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem (1675 bytes)
	I0701 12:27:29.143972  653531 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem (1708 bytes)
	I0701 12:27:29.144004  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:27:29.144021  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem -> /usr/share/ca-certificates/637854.pem
	I0701 12:27:29.144041  653531 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /usr/share/ca-certificates/6378542.pem
	I0701 12:27:29.144072  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:27:29.171419  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:27:29.196509  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:27:29.222599  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:27:29.248989  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:27:29.275034  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem --> /usr/share/ca-certificates/637854.pem (1338 bytes)
	I0701 12:27:29.300102  653531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /usr/share/ca-certificates/6378542.pem (1708 bytes)
	I0701 12:27:29.327329  653531 ssh_runner.go:195] Run: openssl version
	I0701 12:27:29.333121  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:27:29.344555  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:27:29.349319  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:27:29.349394  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:27:29.355247  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 12:27:29.366285  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/637854.pem && ln -fs /usr/share/ca-certificates/637854.pem /etc/ssl/certs/637854.pem"
	I0701 12:27:29.376931  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/637854.pem
	I0701 12:27:29.381303  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 12:11 /usr/share/ca-certificates/637854.pem
	I0701 12:27:29.381385  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/637854.pem
	I0701 12:27:29.387458  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/637854.pem /etc/ssl/certs/51391683.0"
	I0701 12:27:29.398343  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6378542.pem && ln -fs /usr/share/ca-certificates/6378542.pem /etc/ssl/certs/6378542.pem"
	I0701 12:27:29.409321  653531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6378542.pem
	I0701 12:27:29.414299  653531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 12:11 /usr/share/ca-certificates/6378542.pem
	I0701 12:27:29.414400  653531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem
	I0701 12:27:29.420975  653531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6378542.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 12:27:29.434286  653531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 12:27:29.438767  653531 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0701 12:27:29.438817  653531 kubeadm.go:928] updating node {m04 192.168.39.60 0 v1.30.2 docker false true} ...
	I0701 12:27:29.438918  653531 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
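
	The empty ExecStart= immediately before the full ExecStart=... in the kubelet unit above is deliberate systemd syntax: a unit's ExecStart list must be cleared before a drop-in such as 10-kubeadm.conf can redefine it. The general pattern (binary path and flags below are placeholders, not from this log):
	
	[Service]
	ExecStart=
	ExecStart=/new/binary --new-flags
	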
	I0701 12:27:29.438988  653531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 12:27:29.450811  653531 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:27:29.450895  653531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0701 12:27:29.462511  653531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0701 12:27:29.480246  653531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:27:29.497624  653531 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0701 12:27:29.502554  653531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:27:29.515005  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:29.648948  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:27:29.668809  653531 start.go:234] Will wait 6m0s for node &{Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0701 12:27:29.669186  653531 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:27:29.671772  653531 out.go:177] * Verifying Kubernetes components...
	I0701 12:27:29.673288  653531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:27:29.823420  653531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 12:27:29.839349  653531 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:27:29.839675  653531 kapi.go:59] client config for ha-735960: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key", CAFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfbb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0701 12:27:29.839746  653531 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.16:8443
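
	kapi.go above builds a client-go rest.Config from the kubeconfig and then, because the HA VIP (192.168.39.254) is stale after the restart, overrides the host with a concrete control-plane endpoint. A minimal client-go sketch of the same two steps (the kubeconfig path is a placeholder):
	
	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		// Override the stale VIP with a reachable API server, as the log does.
		cfg.Host = "https://192.168.39.16:8443"
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(clientset != nil)
	}
	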
	I0701 12:27:29.840001  653531 node_ready.go:35] waiting up to 6m0s for node "ha-735960-m04" to be "Ready" ...
	I0701 12:27:29.840108  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:27:29.840118  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:29.840130  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:29.840138  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:29.843740  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.340654  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:27:30.340679  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.340687  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.340691  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.344079  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.344547  653531 node_ready.go:49] node "ha-735960-m04" has status "Ready":"True"
	I0701 12:27:30.344570  653531 node_ready.go:38] duration metric: took 504.547887ms for node "ha-735960-m04" to be "Ready" ...
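
	node_ready.go then polls GET /api/v1/nodes/ha-735960-m04 at roughly half-second intervals, for up to 6m0s, until the node reports the NodeReady condition as True. Expressed with client-go's typed API (a sketch, not minikube's code; clientset wiring as in the previous sketch):
	
	package main
	
	import (
		"context"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// waitForNodeReady mirrors the round_trippers trace above: re-fetch the
	// node until its NodeReady condition is True or the timeout lapses.
	func waitForNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) bool {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return true
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
		}
		return false
	}
	
	func main() {} // clientset construction as in the kubeconfig sketch earlier
	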
	I0701 12:27:30.344579  653531 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 12:27:30.344650  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods
	I0701 12:27:30.344660  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.344668  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.344675  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.351108  653531 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 12:27:30.358660  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.358749  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nk4lf
	I0701 12:27:30.358758  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.358766  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.358771  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.362032  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.362784  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:30.362802  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.362812  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.362816  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.365450  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.365914  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:30.365936  653531 pod_ready.go:81] duration metric: took 7.248792ms for pod "coredns-7db6d8ff4d-nk4lf" in "kube-system" namespace to be "Ready" ...
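
	pod_ready.go applies the same pattern to the system-critical pods: a pod counts as Ready when its PodReady condition is True, and each wait also re-fetches the pod's node to confirm the node itself is still Ready. The condition check is small (sketch only):
	
	package main
	
	import (
		corev1 "k8s.io/api/core/v1"
	)
	
	// podReady reports whether the pod's PodReady condition is True, the test
	// each pod_ready.go:92 line above is recording.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}
	
	func main() {}
	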
	I0701 12:27:30.365949  653531 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.366016  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p4rtz
	I0701 12:27:30.366025  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.366035  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.366043  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.368928  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.369820  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:30.369836  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.369843  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.369858  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.373004  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.373769  653531 pod_ready.go:92] pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:30.373785  653531 pod_ready.go:81] duration metric: took 7.830149ms for pod "coredns-7db6d8ff4d-p4rtz" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.373794  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.373848  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960
	I0701 12:27:30.373856  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.373862  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.373867  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.376565  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.377340  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:30.377356  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.377363  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.377367  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.379523  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.379966  653531 pod_ready.go:92] pod "etcd-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:30.379982  653531 pod_ready.go:81] duration metric: took 6.178731ms for pod "etcd-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.379991  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.380048  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m02
	I0701 12:27:30.380055  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.380062  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.380069  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.382485  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.383125  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:30.383141  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.383148  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.383155  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.385845  653531 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 12:27:30.386599  653531 pod_ready.go:92] pod "etcd-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:30.386616  653531 pod_ready.go:81] duration metric: took 6.619715ms for pod "etcd-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.386624  653531 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.541077  653531 request.go:629] Waited for 154.380092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:27:30.541196  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/etcd-ha-735960-m03
	I0701 12:27:30.541207  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.541219  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.541229  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.544660  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
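
	The "Waited for ... due to client-side throttling" lines come from client-go's built-in token-bucket rate limiter, not from API-server priority and fairness: rest.Config defaults to 5 QPS with a burst of 10, so a burst of back-to-back GETs queues briefly on the client, which the log above suggests is what is happening here. A sketch of raising those limits (values are illustrative):
	
	package main
	
	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50    // client-go default is 5 requests/second
		cfg.Burst = 100 // client-go default burst is 10
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			panic(err)
		}
	}
	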
	I0701 12:27:30.740754  653531 request.go:629] Waited for 195.337132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:30.740847  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:30.740857  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.740865  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.740869  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.744492  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:30.745072  653531 pod_ready.go:92] pod "etcd-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:30.745094  653531 pod_ready.go:81] duration metric: took 358.462325ms for pod "etcd-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.745123  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:30.941364  653531 request.go:629] Waited for 196.100673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960
	I0701 12:27:30.941453  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960
	I0701 12:27:30.941466  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:30.941477  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:30.941487  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:30.946577  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:27:31.140711  653531 request.go:629] Waited for 193.223112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:31.140788  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:31.140793  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:31.140800  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:31.140804  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:31.146571  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:27:31.147245  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:31.147269  653531 pod_ready.go:81] duration metric: took 402.135058ms for pod "kube-apiserver-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:31.147280  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:31.341367  653531 request.go:629] Waited for 193.988845ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m02
	I0701 12:27:31.341477  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m02
	I0701 12:27:31.341489  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:31.341500  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:31.341508  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:31.345561  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:31.540709  653531 request.go:629] Waited for 194.115472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:31.540784  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:31.540789  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:31.540797  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:31.540800  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:31.544920  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:31.545652  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:31.545679  653531 pod_ready.go:81] duration metric: took 398.391166ms for pod "kube-apiserver-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:31.545689  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:31.741170  653531 request.go:629] Waited for 195.369232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:31.741243  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-735960-m03
	I0701 12:27:31.741251  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:31.741261  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:31.741272  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:31.745382  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:31.941422  653531 request.go:629] Waited for 195.397431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:31.941512  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:31.941517  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:31.941526  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:31.941531  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:31.945358  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:31.945947  653531 pod_ready.go:92] pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:31.945971  653531 pod_ready.go:81] duration metric: took 400.276204ms for pod "kube-apiserver-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:31.945982  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:32.140926  653531 request.go:629] Waited for 194.860847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960
	I0701 12:27:32.141014  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960
	I0701 12:27:32.141023  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:32.141048  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:32.141058  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:32.146741  653531 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 12:27:32.341040  653531 request.go:629] Waited for 193.334578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:32.341112  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:32.341117  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:32.341126  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:32.341132  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:32.344664  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:32.345182  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:32.345200  653531 pod_ready.go:81] duration metric: took 399.209545ms for pod "kube-controller-manager-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:32.345210  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:32.541314  653531 request.go:629] Waited for 196.016373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m02
	I0701 12:27:32.541395  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m02
	I0701 12:27:32.541402  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:32.541414  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:32.541424  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:32.545663  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:32.741118  653531 request.go:629] Waited for 194.597088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:32.741201  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:32.741209  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:32.741220  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:32.741228  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:32.745051  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:32.745612  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:32.745636  653531 pod_ready.go:81] duration metric: took 400.417224ms for pod "kube-controller-manager-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:32.745651  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:32.941594  653531 request.go:629] Waited for 195.859048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m03
	I0701 12:27:32.941697  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-735960-m03
	I0701 12:27:32.941704  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:32.941712  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:32.941720  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:32.945661  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:33.140796  653531 request.go:629] Waited for 194.297237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:33.140872  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:33.140881  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:33.140892  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:33.140902  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:33.148523  653531 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0701 12:27:33.149119  653531 pod_ready.go:92] pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:33.149229  653531 pod_ready.go:81] duration metric: took 403.561455ms for pod "kube-controller-manager-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:33.149274  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:33.341103  653531 request.go:629] Waited for 191.712414ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25ssf
	I0701 12:27:33.341203  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25ssf
	I0701 12:27:33.341211  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:33.341222  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:33.341236  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:33.345005  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:33.541118  653531 request.go:629] Waited for 195.201433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:27:33.541195  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m04
	I0701 12:27:33.541202  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:33.541212  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:33.541220  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:33.544937  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:33.546208  653531 pod_ready.go:92] pod "kube-proxy-25ssf" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:33.546231  653531 pod_ready.go:81] duration metric: took 396.932438ms for pod "kube-proxy-25ssf" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:33.546244  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:33.741353  653531 request.go:629] Waited for 195.026851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:27:33.741456  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-776rt
	I0701 12:27:33.741466  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:33.741475  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:33.741481  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:33.745239  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:33.941300  653531 request.go:629] Waited for 195.397929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:33.941381  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:33.941388  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:33.941399  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:33.941408  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:33.944917  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:33.945530  653531 pod_ready.go:92] pod "kube-proxy-776rt" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:33.945551  653531 pod_ready.go:81] duration metric: took 399.299813ms for pod "kube-proxy-776rt" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:33.945565  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:34.140984  653531 request.go:629] Waited for 195.324742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:27:34.141050  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b6knb
	I0701 12:27:34.141055  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:34.141063  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:34.141075  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:34.144882  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:34.341131  653531 request.go:629] Waited for 195.426765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:34.341198  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:34.341203  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:34.341211  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:34.341215  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:34.344938  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:34.345533  653531 pod_ready.go:92] pod "kube-proxy-b6knb" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:34.345554  653531 pod_ready.go:81] duration metric: took 399.982623ms for pod "kube-proxy-b6knb" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:34.345563  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:34.540691  653531 request.go:629] Waited for 195.046851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:27:34.540777  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lphzn
	I0701 12:27:34.540782  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:34.540794  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:34.540798  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:34.544410  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:34.741782  653531 request.go:629] Waited for 196.474041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:34.741851  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:34.741856  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:34.741864  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:34.741869  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:34.745447  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:34.746289  653531 pod_ready.go:92] pod "kube-proxy-lphzn" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:34.746312  653531 pod_ready.go:81] duration metric: took 400.742893ms for pod "kube-proxy-lphzn" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:34.746344  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:34.941411  653531 request.go:629] Waited for 194.97877ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:27:34.941489  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960
	I0701 12:27:34.941495  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:34.941502  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:34.941510  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:34.944984  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:35.141079  653531 request.go:629] Waited for 195.409668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:35.141163  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960
	I0701 12:27:35.141168  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:35.141176  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:35.141194  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:35.144737  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:35.145431  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:35.145471  653531 pod_ready.go:81] duration metric: took 399.115782ms for pod "kube-scheduler-ha-735960" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:35.145485  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:35.341554  653531 request.go:629] Waited for 195.979537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:27:35.341639  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m02
	I0701 12:27:35.341650  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:35.341661  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:35.341672  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:35.345199  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:35.541252  653531 request.go:629] Waited for 195.403848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:35.541340  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m02
	I0701 12:27:35.541346  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:35.541354  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:35.541362  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:35.545398  653531 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 12:27:35.546010  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:35.546037  653531 pod_ready.go:81] duration metric: took 400.543297ms for pod "kube-scheduler-ha-735960-m02" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:35.546051  653531 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:35.741442  653531 request.go:629] Waited for 195.294004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:27:35.741533  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-735960-m03
	I0701 12:27:35.741541  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:35.741553  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:35.741565  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:35.744725  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:35.940687  653531 request.go:629] Waited for 195.284608ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:35.940760  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes/ha-735960-m03
	I0701 12:27:35.940766  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:35.940776  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:35.940783  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:35.944482  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:35.945011  653531 pod_ready.go:92] pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace has status "Ready":"True"
	I0701 12:27:35.945032  653531 pod_ready.go:81] duration metric: took 398.973476ms for pod "kube-scheduler-ha-735960-m03" in "kube-system" namespace to be "Ready" ...
	I0701 12:27:35.945048  653531 pod_ready.go:38] duration metric: took 5.600458409s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 12:27:35.945074  653531 system_svc.go:44] waiting for kubelet service to be running ....
	I0701 12:27:35.945143  653531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:27:35.962762  653531 system_svc.go:56] duration metric: took 17.680549ms WaitForService to wait for kubelet
	I0701 12:27:35.962795  653531 kubeadm.go:576] duration metric: took 6.293928606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:27:35.962817  653531 node_conditions.go:102] verifying NodePressure condition ...
	I0701 12:27:36.141286  653531 request.go:629] Waited for 178.366419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.16:8443/api/v1/nodes
	I0701 12:27:36.141375  653531 round_trippers.go:463] GET https://192.168.39.16:8443/api/v1/nodes
	I0701 12:27:36.141382  653531 round_trippers.go:469] Request Headers:
	I0701 12:27:36.141394  653531 round_trippers.go:473]     Accept: application/json, */*
	I0701 12:27:36.141404  653531 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0701 12:27:36.145426  653531 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 12:27:36.146951  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:36.146977  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:36.146989  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:36.146992  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:36.146996  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:36.146999  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:36.147001  653531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0701 12:27:36.147004  653531 node_conditions.go:123] node cpu capacity is 2
	I0701 12:27:36.147009  653531 node_conditions.go:105] duration metric: took 184.187151ms to run NodePressure ...
	I0701 12:27:36.147024  653531 start.go:240] waiting for startup goroutines ...
	I0701 12:27:36.147054  653531 start.go:254] writing updated cluster config ...
	I0701 12:27:36.147403  653531 ssh_runner.go:195] Run: rm -f paused
	I0701 12:27:36.201170  653531 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0701 12:27:36.203376  653531 out.go:177] * Done! kubectl is now configured to use "ha-735960" cluster and "default" namespace by default
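	
	The pod_ready checks above poll each pod's "Ready" condition through the apiserver at 192.168.39.16:8443, spacing requests ~200ms apart because of client-side throttling. A minimal sketch of reproducing one such check by hand, assuming the kubeconfig minikube wrote and a kubectl context named after the profile:
	
	  # prints "True" once the kubelet reports the pod Ready
	  kubectl --context ha-735960 -n kube-system get pod kube-proxy-25ssf \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'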
	
	
	==> Docker <==
	Jul 01 12:25:13 ha-735960 cri-dockerd[1398]: time="2024-07-01T12:25:13Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.366654170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.366710385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.366723641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.367696676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.388479723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.388593936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.389018347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.389381366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.390771396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.391192786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.391291548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:23 ha-735960 dockerd[1125]: time="2024-07-01T12:25:23.391685449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:24 ha-735960 dockerd[1125]: time="2024-07-01T12:25:24.321168284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:24 ha-735960 dockerd[1125]: time="2024-07-01T12:25:24.321255362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:24 ha-735960 dockerd[1125]: time="2024-07-01T12:25:24.321269990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:24 ha-735960 dockerd[1125]: time="2024-07-01T12:25:24.321347198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:25 ha-735960 dockerd[1125]: time="2024-07-01T12:25:25.309227018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:25 ha-735960 dockerd[1125]: time="2024-07-01T12:25:25.309334545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:25 ha-735960 dockerd[1125]: time="2024-07-01T12:25:25.309346230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:25 ha-735960 dockerd[1125]: time="2024-07-01T12:25:25.309972461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:26 ha-735960 dockerd[1125]: time="2024-07-01T12:25:26.350220788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 12:25:26 ha-735960 dockerd[1125]: time="2024-07-01T12:25:26.350306647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 12:25:26 ha-735960 dockerd[1125]: time="2024-07-01T12:25:26.350329844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 12:25:26 ha-735960 dockerd[1125]: time="2024-07-01T12:25:26.350448560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	51a34f4432461       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       1                   d2dc46de092d5       storage-provisioner
	bf788c37e0912       ac1c61439df46                                                                                         3 minutes ago       Running             kindnet-cni               1                   afbde11b8a740       kindnet-7f6hm
	8cdf2026ed072       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   7d907d7b28c98       busybox-fc5497c4f-pjfcw
	710f5c3a9f856       53c535741fb44                                                                                         3 minutes ago       Running             kube-proxy                1                   e49ff3fb80595       kube-proxy-lphzn
	61dc29970290b       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   de1daec45ac89       coredns-7db6d8ff4d-p4rtz
	4a151786b08f5       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   26981372e6136       coredns-7db6d8ff4d-nk4lf
	8ee3e44a43c3b       56ce0fd9fb532                                                                                         4 minutes ago       Running             kube-apiserver            5                   1b92afc0e4763       kube-apiserver-ha-735960
	67dc946c8c45c       e874818b3caac                                                                                         4 minutes ago       Running             kube-controller-manager   5                   3379ae4b4d689       kube-controller-manager-ha-735960
	1c046b029aa4a       38af8ddebf499                                                                                         4 minutes ago       Running             kube-vip                  1                   32c93b266a82d       kube-vip-ha-735960
	693eb0b8f5d78       7820c83aa1394                                                                                         4 minutes ago       Running             kube-scheduler            2                   ec2e5d106b539       kube-scheduler-ha-735960
	ec2c061093f10       e874818b3caac                                                                                         4 minutes ago       Exited              kube-controller-manager   4                   3379ae4b4d689       kube-controller-manager-ha-735960
	852492f61fee7       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      2                   c9044136ea747       etcd-ha-735960
	a3cb59ee8d572       56ce0fd9fb532                                                                                         4 minutes ago       Exited              kube-apiserver            4                   1b92afc0e4763       kube-apiserver-ha-735960
	cecb3dd12e16e       38af8ddebf499                                                                                         7 minutes ago       Exited              kube-vip                  0                   8d1562fb4b8c3       kube-vip-ha-735960
	6a200a6b49020       3861cfcd7c04c                                                                                         7 minutes ago       Exited              etcd                      1                   5b1097d48d724       etcd-ha-735960
	2d71437c5f06d       7820c83aa1394                                                                                         7 minutes ago       Exited              kube-scheduler            1                   fa7dea6a1b8bd       kube-scheduler-ha-735960
	1ef6d9da6a9c5       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   10 minutes ago      Exited              busybox                   0                   1f5ccc7b0e655       busybox-fc5497c4f-pjfcw
	a9c30cd4b3455       cbb01a7bd410d                                                                                         12 minutes ago      Exited              coredns                   0                   7b4b4f7ec4b63       coredns-7db6d8ff4d-nk4lf
	769b0b8751350       cbb01a7bd410d                                                                                         12 minutes ago      Exited              coredns                   0                   7a349370d4f88       coredns-7db6d8ff4d-p4rtz
	f472aef5302fd       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              12 minutes ago      Exited              kindnet-cni               0                   ab9c74a502295       kindnet-7f6hm
	6116abe6039dc       53c535741fb44                                                                                         13 minutes ago      Exited              kube-proxy                0                   da69191059798       kube-proxy-lphzn
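	
	The listing above is CRI container state from inside the node; the Exited kube-apiserver and kube-controller-manager rows with lower ATTEMPT numbers record earlier start attempts of the same static pods across the restart. A sketch of pulling the same view yourself, assuming crictl is available in the node image:
	
	  minikube ssh -p ha-735960 -- sudo crictl ps -a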
	
	
	==> coredns [4a151786b08f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47509 - 49224 "HINFO IN 6979381009676685748.1822735874857968465. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033568754s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[177456986]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.743) (total time: 30001ms):
	Trace[177456986]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:25:53.744)
	Trace[177456986]: [30.001445665s] [30.001445665s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[947462717]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.742) (total time: 30003ms):
	Trace[947462717]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:25:53.743)
	Trace[947462717]: [30.0032009s] [30.0032009s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[886534813]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.742) (total time: 30004ms):
	Trace[886534813]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (12:25:53.745)
	Trace[886534813]: [30.004749172s] [30.004749172s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [61dc29970290] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49574 - 32592 "HINFO IN 7534101530096432962.1842168600618500663. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017366932s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2027452150]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.742) (total time: 30003ms):
	Trace[2027452150]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:25:53.743)
	Trace[2027452150]: [30.003896779s] [30.003896779s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[222503702]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.743) (total time: 30003ms):
	Trace[222503702]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:25:53.744)
	Trace[222503702]: [30.003901467s] [30.003901467s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1950728267]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 12:25:23.742) (total time: 30005ms):
	Trace[1950728267]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (12:25:53.745)
	Trace[1950728267]: [30.005235099s] [30.005235099s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
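	
	Both coredns replicas show the same pattern: 30s list timeouts against 10.96.0.1:443 (the in-cluster kubernetes Service VIP) starting at 12:25:23, while the restarted apiserver was still coming up, followed by normal serving. A hypothetical check (not part of the test) to confirm the VIP is backed by healthy apiserver endpoints once the cluster settles:
	
	  kubectl --context ha-735960 -n default get svc kubernetes -o wide
	  kubectl --context ha-735960 -n default get endpoints kubernetes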
	
	
	==> coredns [769b0b875135] <==
	[INFO] 10.244.1.2:44221 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000082797s
	[INFO] 10.244.2.2:33797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157729s
	[INFO] 10.244.2.2:52590 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004055351s
	[INFO] 10.244.2.2:46983 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003253494s
	[INFO] 10.244.2.2:56187 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205215s
	[INFO] 10.244.2.2:41086 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158307s
	[INFO] 10.244.0.4:47783 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097077s
	[INFO] 10.244.0.4:50743 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001523s
	[INFO] 10.244.0.4:37141 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138763s
	[INFO] 10.244.1.2:32981 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132906s
	[INFO] 10.244.1.2:36762 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001646552s
	[INFO] 10.244.1.2:33583 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072434s
	[INFO] 10.244.2.2:37027 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156518s
	[INFO] 10.244.2.2:58435 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104504s
	[INFO] 10.244.2.2:36107 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090251s
	[INFO] 10.244.0.4:44792 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000227164s
	[INFO] 10.244.0.4:56557 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000140925s
	[INFO] 10.244.1.2:38284 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000232717s
	[INFO] 10.244.2.2:37664 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135198s
	[INFO] 10.244.2.2:60876 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00032392s
	[INFO] 10.244.1.2:37461 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133264s
	[INFO] 10.244.1.2:45182 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117372s
	[INFO] 10.244.1.2:37156 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000240093s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a9c30cd4b345] <==
	[INFO] 10.244.0.4:57095 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002251804s
	[INFO] 10.244.0.4:42381 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081215s
	[INFO] 10.244.0.4:53499 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00124929s
	[INFO] 10.244.0.4:41287 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174281s
	[INFO] 10.244.0.4:36433 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142863s
	[INFO] 10.244.1.2:47688 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130034s
	[INFO] 10.244.1.2:40562 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00183587s
	[INFO] 10.244.1.2:35137 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000771s
	[INFO] 10.244.1.2:37798 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184282s
	[INFO] 10.244.1.2:43876 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008807s
	[INFO] 10.244.2.2:35039 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119303s
	[INFO] 10.244.0.4:53229 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090292s
	[INFO] 10.244.0.4:42097 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011308s
	[INFO] 10.244.1.2:42114 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130767s
	[INFO] 10.244.1.2:56638 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110707s
	[INFO] 10.244.1.2:55805 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093484s
	[INFO] 10.244.2.2:51675 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000145117s
	[INFO] 10.244.2.2:56838 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136843s
	[INFO] 10.244.0.4:60951 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162889s
	[INFO] 10.244.0.4:34776 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112367s
	[INFO] 10.244.0.4:45397 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000073771s
	[INFO] 10.244.0.4:52372 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000058127s
	[INFO] 10.244.1.2:41033 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131962s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-735960
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-735960
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=ha-735960
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_01T12_15_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 12:15:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-735960
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 12:29:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 12:25:13 +0000   Mon, 01 Jul 2024 12:15:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 12:25:13 +0000   Mon, 01 Jul 2024 12:15:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 12:25:13 +0000   Mon, 01 Jul 2024 12:15:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 12:25:13 +0000   Mon, 01 Jul 2024 12:16:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.16
	  Hostname:    ha-735960
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a500128d5645446baeea5654afbcb060
	  System UUID:                a500128d-5645-446b-aeea-5654afbcb060
	  Boot ID:                    a9ffe936-2356-415e-aa5e-ceedcf15ed72
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pjfcw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-nk4lf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-p4rtz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-735960                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-7f6hm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-735960             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-735960    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-lphzn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-735960             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-735960                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m45s                  kube-proxy       
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  NodeHasSufficientPID     13m                    kubelet          Node ha-735960 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                    kubelet          Node ha-735960 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                    kubelet          Node ha-735960 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           13m                    node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  NodeReady                12m                    kubelet          Node ha-735960 status is now: NodeReady
	  Normal  RegisteredNode           11m                    node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  RegisteredNode           8m34s                  node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  Starting                 4m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m43s (x8 over 4m43s)  kubelet          Node ha-735960 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m43s (x8 over 4m43s)  kubelet          Node ha-735960 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m43s (x7 over 4m43s)  kubelet          Node ha-735960 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  RegisteredNode           3m45s                  node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  RegisteredNode           2m9s                   node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
	  Normal  RegisteredNode           13s                    node-controller  Node ha-735960 event: Registered Node ha-735960 in Controller
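	
	For reference, the percentages kubectl prints in the tables above are requests (or limits) divided by node allocatable, which here is 2 CPUs (2000m) and 2164184Ki of memory:
	
	  cpu:    950m  / 2000m     = 47%
	  memory: 290Mi / 2164184Ki ≈ 13%   (290Mi = 296960Ki)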
	
	
	Name:               ha-735960-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-735960-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=ha-735960
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_01T12_17_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 12:16:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-735960-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 12:29:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 12:25:08 +0000   Mon, 01 Jul 2024 12:16:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 12:25:08 +0000   Mon, 01 Jul 2024 12:16:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 12:25:08 +0000   Mon, 01 Jul 2024 12:16:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 12:25:08 +0000   Mon, 01 Jul 2024 12:17:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.86
	  Hostname:    ha-735960-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 58cf4e4771994f2084a06f7d76199172
	  System UUID:                58cf4e47-7199-4f20-84a0-6f7d76199172
	  Boot ID:                    41c32de2-f03a-41e4-b332-91dc3dc2ccaf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-twnb4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-735960-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-bztzv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-735960-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-735960-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-b6knb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-735960-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-735960-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m51s                  kube-proxy       
	  Normal   Starting                 8m47s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-735960-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-735960-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-735960-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           12m                    node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Warning  Rebooted                 8m52s                  kubelet          Node ha-735960-m02 has been rebooted, boot id: 64290a4a-a20d-436b-8567-0d3e8b822776
	  Normal   NodeHasSufficientPID     8m52s                  kubelet          Node ha-735960-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    8m52s                  kubelet          Node ha-735960-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 8m52s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8m52s                  kubelet          Node ha-735960-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           8m34s                  node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   Starting                 4m19s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  4m19s (x8 over 4m19s)  kubelet          Node ha-735960-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m19s (x8 over 4m19s)  kubelet          Node ha-735960-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m19s (x7 over 4m19s)  kubelet          Node ha-735960-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           3m56s                  node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   RegisteredNode           3m45s                  node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   RegisteredNode           2m9s                   node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	  Normal   RegisteredNode           13s                    node-controller  Node ha-735960-m02 event: Registered Node ha-735960-m02 in Controller
	
	
	Name:               ha-735960-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-735960-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=ha-735960
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_01T12_18_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 12:18:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-735960-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 12:29:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 12:26:42 +0000   Mon, 01 Jul 2024 12:26:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 12:26:42 +0000   Mon, 01 Jul 2024 12:26:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 12:26:42 +0000   Mon, 01 Jul 2024 12:26:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 12:26:42 +0000   Mon, 01 Jul 2024 12:26:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    ha-735960-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 995d5c3b59f847378d8e94e940e73ad6
	  System UUID:                995d5c3b-59f8-4737-8d8e-94e940e73ad6
	  Boot ID:                    bc7ccd53-413f-4b49-a89c-18c93eb90ad9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cpsct                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-735960-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-2424m                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-735960-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-735960-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-776rt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-735960-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-735960-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m23s                  kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node ha-735960-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-735960-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-735960-m03 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           11m                    node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   RegisteredNode           8m35s                  node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   RegisteredNode           3m57s                  node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   RegisteredNode           3m46s                  node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   NodeNotReady             3m17s                  node-controller  Node ha-735960-m03 status is now: NodeNotReady
	  Normal   Starting                 2m28s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m28s (x3 over 2m28s)  kubelet          Node ha-735960-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m28s (x3 over 2m28s)  kubelet          Node ha-735960-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m28s (x3 over 2m28s)  kubelet          Node ha-735960-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m28s (x2 over 2m28s)  kubelet          Node ha-735960-m03 has been rebooted, boot id: bc7ccd53-413f-4b49-a89c-18c93eb90ad9
	  Normal   NodeReady                2m28s (x2 over 2m28s)  kubelet          Node ha-735960-m03 status is now: NodeReady
	  Normal   RegisteredNode           2m10s                  node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	  Normal   RegisteredNode           14s                    node-controller  Node ha-735960-m03 event: Registered Node ha-735960-m03 in Controller
	
	
	Name:               ha-735960-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-735960-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=ha-735960
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_01T12_19_10_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 12:19:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-735960-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 12:29:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 12:27:30 +0000   Mon, 01 Jul 2024 12:27:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 12:27:30 +0000   Mon, 01 Jul 2024 12:27:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 12:27:30 +0000   Mon, 01 Jul 2024 12:27:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 12:27:30 +0000   Mon, 01 Jul 2024 12:27:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    ha-735960-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd9ce62e425d4b9a9ba9ce7045362f6f
	  System UUID:                fd9ce62e-425d-4b9a-9ba9-ce7045362f6f
	  Boot ID:                    ac395c38-b578-4b7c-8c31-9939ff570d11
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6gx8s       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-25ssf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 98s                  kube-proxy       
	  Normal   Starting                 9m53s                kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)    kubelet          Node ha-735960-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)    kubelet          Node ha-735960-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)    kubelet          Node ha-735960-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                  node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   RegisteredNode           9m59s                node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   RegisteredNode           9m59s                node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   NodeReady                9m49s                kubelet          Node ha-735960-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m35s                node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   RegisteredNode           3m57s                node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   RegisteredNode           3m46s                node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   NodeNotReady             3m17s                node-controller  Node ha-735960-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           2m10s                node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	  Normal   Starting                 101s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  100s (x2 over 100s)  kubelet          Node ha-735960-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    100s (x2 over 100s)  kubelet          Node ha-735960-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     100s (x2 over 100s)  kubelet          Node ha-735960-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 100s                 kubelet          Node ha-735960-m04 has been rebooted, boot id: ac395c38-b578-4b7c-8c31-9939ff570d11
	  Normal   NodeReady                100s                 kubelet          Node ha-735960-m04 status is now: NodeReady
	  Normal   RegisteredNode           14s                  node-controller  Node ha-735960-m04 event: Registered Node ha-735960-m04 in Controller
	
	
	Name:               ha-735960-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-735960-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=ha-735960
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_01T12_28_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 12:28:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-735960-m05
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 12:29:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 12:29:08 +0000   Mon, 01 Jul 2024 12:28:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 12:29:08 +0000   Mon, 01 Jul 2024 12:28:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 12:29:08 +0000   Mon, 01 Jul 2024 12:28:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 12:29:08 +0000   Mon, 01 Jul 2024 12:28:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.36
	  Hostname:    ha-735960-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac5bfa209102440dba489285dca931bd
	  System UUID:                ac5bfa20-9102-440d-ba48-9285dca931bd
	  Boot ID:                    7cf74a98-f899-47d1-9d91-60652f40aade
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-735960-m05                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         30s
	  kube-system                 kindnet-c7gxg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      33s
	  kube-system                 kube-apiserver-ha-735960-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-ha-735960-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-7z9kk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-ha-735960-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-vip-ha-735960-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  NodeHasSufficientMemory  33s (x8 over 33s)  kubelet          Node ha-735960-m05 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s (x8 over 33s)  kubelet          Node ha-735960-m05 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s (x7 over 33s)  kubelet          Node ha-735960-m05 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  33s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           32s                node-controller  Node ha-735960-m05 event: Registered Node ha-735960-m05 in Controller
	  Normal  RegisteredNode           31s                node-controller  Node ha-735960-m05 event: Registered Node ha-735960-m05 in Controller
	  Normal  RegisteredNode           30s                node-controller  Node ha-735960-m05 event: Registered Node ha-735960-m05 in Controller
	  Normal  RegisteredNode           14s                node-controller  Node ha-735960-m05 event: Registered Node ha-735960-m05 in Controller
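
The Rebooted/NodeReady event pairs in the node descriptions above are kubelet's restart detection: on startup it compares the machine's current boot ID with the one recorded on the Node object. A minimal client-go sketch that surfaces the same signal (illustrative only; the kubeconfig path and all names here are placeholders, not minikube's code):

	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumes a reachable cluster via the default kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// BootID changes across VM restarts; Ready reflects kubelet health.
			for _, c := range n.Status.Conditions {
				if c.Type == "Ready" {
					fmt.Printf("%s\tReady=%s\tbootID=%s\n",
						n.Name, c.Status, n.Status.NodeInfo.BootID)
				}
			}
		}
	}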
	
	
	==> dmesg <==
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050613] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036847] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.466422] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.742414] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.542503] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.890956] systemd-fstab-generator[479]: Ignoring "noauto" option for root device
	[  +0.054969] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050473] systemd-fstab-generator[491]: Ignoring "noauto" option for root device
	[  +2.186564] systemd-fstab-generator[1047]: Ignoring "noauto" option for root device
	[  +0.281745] systemd-fstab-generator[1084]: Ignoring "noauto" option for root device
	[  +0.110826] systemd-fstab-generator[1096]: Ignoring "noauto" option for root device
	[  +0.123894] systemd-fstab-generator[1110]: Ignoring "noauto" option for root device
	[  +2.248144] kauditd_printk_skb: 195 callbacks suppressed
	[  +0.296890] systemd-fstab-generator[1351]: Ignoring "noauto" option for root device
	[  +0.110572] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
	[  +0.111234] systemd-fstab-generator[1375]: Ignoring "noauto" option for root device
	[  +0.128120] systemd-fstab-generator[1390]: Ignoring "noauto" option for root device
	[  +0.483978] systemd-fstab-generator[1543]: Ignoring "noauto" option for root device
	[  +6.839985] kauditd_printk_skb: 176 callbacks suppressed
	[ +10.416982] kauditd_printk_skb: 40 callbacks suppressed
	[Jul 1 12:25] kauditd_printk_skb: 30 callbacks suppressed
	[ +36.086285] kauditd_printk_skb: 48 callbacks suppressed
	
	
	==> etcd [6a200a6b4902] <==
	{"level":"info","ts":"2024-07-01T12:23:54.888482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:54.888629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:54.888657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:54.888687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:54.88881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:56.288805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:56.288918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:56.288952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:56.289018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:56.289055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:57.688608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:57.688647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:57.688658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:57.688677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:57.688686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"warn","ts":"2024-07-01T12:23:57.772826Z","caller":"etcdserver/server.go:2089","msg":"failed to publish local member to cluster through raft","local-member-id":"b6c76b3131c1024","local-member-attributes":"{Name:ha-735960 ClientURLs:[https://192.168.39.16:2379]}","request-path":"/0/members/b6c76b3131c1024/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-07-01T12:23:59.088585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:59.088645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:59.08866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:59.088676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
	{"level":"info","ts":"2024-07-01T12:23:59.088691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
	{"level":"warn","ts":"2024-07-01T12:23:59.821067Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c77bbbee62c21090","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:23:59.821149Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c77bbbee62c21090","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:23:59.836394Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-01T12:23:59.837603Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: no route to host"}
	
	
	==> etcd [852492f61fee] <==
	{"level":"info","ts":"2024-07-01T12:28:37.671074Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"ee1971b4bd9110fc"}
	{"level":"info","ts":"2024-07-01T12:28:37.671397Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b6c76b3131c1024","remote-peer-id":"ee1971b4bd9110fc"}
	{"level":"info","ts":"2024-07-01T12:28:37.67205Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b6c76b3131c1024","remote-peer-id":"ee1971b4bd9110fc"}
	{"level":"info","ts":"2024-07-01T12:28:37.672633Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b6c76b3131c1024","remote-peer-id":"ee1971b4bd9110fc"}
	{"level":"info","ts":"2024-07-01T12:28:37.676276Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"ee1971b4bd9110fc"}
	{"level":"info","ts":"2024-07-01T12:28:37.676412Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b6c76b3131c1024","remote-peer-id":"ee1971b4bd9110fc"}
	{"level":"info","ts":"2024-07-01T12:28:37.676632Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b6c76b3131c1024","remote-peer-id":"ee1971b4bd9110fc","remote-peer-urls":["https://192.168.39.36:2380"]}
	{"level":"info","ts":"2024-07-01T12:28:37.677132Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"b6c76b3131c1024","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"ee1971b4bd9110fc"}
	{"level":"info","ts":"2024-07-01T12:28:37.677162Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b6c76b3131c1024","remote-peer-id":"ee1971b4bd9110fc"}
	{"level":"warn","ts":"2024-07-01T12:28:38.253452Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"ee1971b4bd9110fc","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-07-01T12:28:38.431305Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.36:2380/version","remote-member-id":"ee1971b4bd9110fc","error":"Get \"https://192.168.39.36:2380/version\": dial tcp 192.168.39.36:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:28:38.43161Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ee1971b4bd9110fc","error":"Get \"https://192.168.39.36:2380/version\": dial tcp 192.168.39.36:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T12:28:38.7509Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"ee1971b4bd9110fc","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-07-01T12:28:39.371143Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"ee1971b4bd9110fc"}
	{"level":"info","ts":"2024-07-01T12:28:39.372808Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b6c76b3131c1024","remote-peer-id":"ee1971b4bd9110fc"}
	{"level":"info","ts":"2024-07-01T12:28:39.373278Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b6c76b3131c1024","to":"ee1971b4bd9110fc","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-01T12:28:39.373513Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b6c76b3131c1024","remote-peer-id":"ee1971b4bd9110fc"}
	{"level":"info","ts":"2024-07-01T12:28:39.373472Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b6c76b3131c1024","remote-peer-id":"ee1971b4bd9110fc"}
	{"level":"info","ts":"2024-07-01T12:28:39.442599Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b6c76b3131c1024","to":"ee1971b4bd9110fc","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-01T12:28:39.442655Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b6c76b3131c1024","remote-peer-id":"ee1971b4bd9110fc"}
	{"level":"warn","ts":"2024-07-01T12:28:39.740284Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"ee1971b4bd9110fc","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-07-01T12:28:40.240481Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"ee1971b4bd9110fc","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-07-01T12:28:41.251457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 switched to configuration voters=(823163343393787940 8598916461351987711 14374289268216565904 17156869276533068028)"}
	{"level":"info","ts":"2024-07-01T12:28:41.251925Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"cad58bbf0f3daddf","local-member-id":"b6c76b3131c1024"}
	{"level":"info","ts":"2024-07-01T12:28:41.252172Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"b6c76b3131c1024","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"ee1971b4bd9110fc"}
	
	
	==> kernel <==
	 12:29:10 up 5 min,  0 users,  load average: 0.24, 0.19, 0.10
	Linux ha-735960 5.10.207 #1 SMP Wed Jun 26 19:37:34 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bf788c37e091] <==
	I0701 12:28:46.598536       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:28:46.598605       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:28:46.598669       1 main.go:223] Handling node with IPs: map[192.168.39.36:{}]
	I0701 12:28:46.598722       1 main.go:250] Node ha-735960-m05 has CIDR [10.244.4.0/24] 
	I0701 12:28:46.598910       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.4.0/24 Src: <nil> Gw: 192.168.39.36 Flags: [] Table: 0} 
	I0701 12:28:56.614691       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:28:56.614924       1 main.go:227] handling current node
	I0701 12:28:56.615041       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:28:56.615118       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:28:56.615323       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:28:56.615459       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:28:56.615623       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:28:56.615708       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:28:56.615882       1 main.go:223] Handling node with IPs: map[192.168.39.36:{}]
	I0701 12:28:56.615965       1 main.go:250] Node ha-735960-m05 has CIDR [10.244.4.0/24] 
	I0701 12:29:06.630695       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:29:06.630718       1 main.go:227] handling current node
	I0701 12:29:06.630729       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:29:06.630733       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:29:06.630894       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:29:06.630900       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:29:06.630958       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:29:06.630962       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:29:06.631001       1 main.go:223] Handling node with IPs: map[192.168.39.36:{}]
	I0701 12:29:06.631004       1 main.go:250] Node ha-735960-m05 has CIDR [10.244.4.0/24] 
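
Each "Adding route" line above is kindnet programming a kernel route toward a remote node's PodCIDR via that node's InternalIP (the logged struct is a netlink route). A stripped-down sketch of that pattern using the vishvananda/netlink package (CIDR and gateway taken from the log; this is not kindnet's actual source):

	package main
	
	import (
		"log"
		"net"
	
		"github.com/vishvananda/netlink"
	)
	
	func main() {
		// Route pod traffic for ha-735960-m05's PodCIDR via that node's IP,
		// mirroring: Adding route {Dst: 10.244.4.0/24 Gw: 192.168.39.36}
		_, dst, err := net.ParseCIDR("10.244.4.0/24")
		if err != nil {
			log.Fatal(err)
		}
		route := netlink.Route{
			Dst: dst,
			Gw:  net.ParseIP("192.168.39.36"),
		}
		// RouteReplace creates the route or updates an existing one
		// (requires CAP_NET_ADMIN, i.e. run as root).
		if err := netlink.RouteReplace(&route); err != nil {
			log.Fatal(err)
		}
		log.Printf("installed route %s via %s", dst, route.Gw)
	}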
	
	
	==> kindnet [f472aef5302f] <==
	I0701 12:20:12.428842       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:20:22.443154       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:20:22.443292       1 main.go:227] handling current node
	I0701 12:20:22.443323       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:20:22.443388       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:20:22.443605       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:20:22.443653       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:20:22.443793       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:20:22.443836       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:20:32.451395       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:20:32.451431       1 main.go:227] handling current node
	I0701 12:20:32.451481       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:20:32.451486       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:20:32.451947       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:20:32.451980       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:20:32.452873       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:20:32.453015       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	I0701 12:20:42.470169       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0701 12:20:42.470264       1 main.go:227] handling current node
	I0701 12:20:42.470289       1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
	I0701 12:20:42.470302       1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24] 
	I0701 12:20:42.470523       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0701 12:20:42.470616       1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24] 
	I0701 12:20:42.470868       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0701 12:20:42.470914       1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8ee3e44a43c3] <==
	I0701 12:25:11.632913       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0701 12:25:11.645811       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0701 12:25:11.645876       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0701 12:25:11.690103       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0701 12:25:11.690292       1 policy_source.go:224] refreshing policies
	I0701 12:25:11.718179       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0701 12:25:11.726917       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0701 12:25:11.729879       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0701 12:25:11.730212       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0701 12:25:11.730238       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0701 12:25:11.737552       1 shared_informer.go:320] Caches are synced for configmaps
	I0701 12:25:11.751625       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0701 12:25:11.752269       1 aggregator.go:165] initial CRD sync complete...
	I0701 12:25:11.752312       1 autoregister_controller.go:141] Starting autoregister controller
	I0701 12:25:11.752319       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0701 12:25:11.752325       1 cache.go:39] Caches are synced for autoregister controller
	I0701 12:25:11.756015       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0701 12:25:11.757180       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0701 12:25:11.779526       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0701 12:25:11.807352       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.86]
	I0701 12:25:11.811699       1 controller.go:615] quota admission added evaluator for: endpoints
	I0701 12:25:11.839496       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0701 12:25:11.843047       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0701 12:25:12.631101       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0701 12:25:13.074615       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.16 192.168.39.86]
	
	
	==> kube-apiserver [a3cb59ee8d57] <==
	I0701 12:24:33.660467       1 options.go:221] external host was not specified, using 192.168.39.16
	I0701 12:24:33.670142       1 server.go:148] Version: v1.30.2
	I0701 12:24:33.670491       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:24:34.296638       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0701 12:24:34.308879       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0701 12:24:34.324179       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0701 12:24:34.324219       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0701 12:24:34.326894       1 instance.go:299] Using reconciler: lease
	W0701 12:24:54.288105       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0701 12:24:54.289911       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0701 12:24:54.328399       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [67dc946c8c45] <==
	I0701 12:25:24.710493       1 shared_informer.go:320] Caches are synced for stateful set
	I0701 12:25:24.741914       1 shared_informer.go:320] Caches are synced for resource quota
	I0701 12:25:24.771129       1 shared_informer.go:320] Caches are synced for disruption
	I0701 12:25:24.825005       1 shared_informer.go:320] Caches are synced for persistent volume
	I0701 12:25:25.061636       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.968119ms"
	I0701 12:25:25.061928       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.671µs"
	I0701 12:25:25.231337       1 shared_informer.go:320] Caches are synced for garbage collector
	I0701 12:25:25.278015       1 shared_informer.go:320] Caches are synced for garbage collector
	I0701 12:25:25.278079       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0701 12:25:53.073870       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-735960-m04"
	I0701 12:25:53.162214       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.543735ms"
	I0701 12:25:53.163381       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="162.337µs"
	I0701 12:25:59.557437       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.6658ms"
	I0701 12:25:59.558362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="80.196µs"
	I0701 12:25:59.565576       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-s49dr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-s49dr\": the object has been modified; please apply your changes to the latest version and try again"
	I0701 12:25:59.566070       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"673ce502-ab01-47a0-ad3e-c33bd402b496", APIVersion:"v1", ResourceVersion:"234", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-s49dr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-s49dr": the object has been modified; please apply your changes to the latest version and try again
	I0701 12:26:43.750974       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="174.579µs"
	I0701 12:26:47.044231       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.968469ms"
	I0701 12:26:47.047107       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.336µs"
	I0701 12:27:30.083176       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-735960-m04"
	I0701 12:28:37.391320       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-735960-m04"
	I0701 12:28:37.393588       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-735960-m05\" does not exist"
	I0701 12:28:37.409892       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-735960-m05" podCIDRs=["10.244.4.0/24"]
	I0701 12:28:39.645666       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-735960-m05"
	I0701 12:28:50.194673       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-735960-m04"
	
	
	==> kube-controller-manager [ec2c061093f1] <==
	I0701 12:24:33.938262       1 serving.go:380] Generated self-signed cert in-memory
	I0701 12:24:34.667463       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0701 12:24:34.667501       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:24:34.670076       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0701 12:24:34.670322       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0701 12:24:34.670888       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0701 12:24:34.671075       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0701 12:24:55.336106       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.16:8443/healthz\": dial tcp 192.168.39.16:8443: connect: connection refused"
	
	
	==> kube-proxy [6116abe6039d] <==
	I0701 12:16:09.205590       1 server_linux.go:69] "Using iptables proxy"
	I0701 12:16:09.223098       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.16"]
	I0701 12:16:09.284088       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0701 12:16:09.284134       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0701 12:16:09.284152       1 server_linux.go:165] "Using iptables Proxier"
	I0701 12:16:09.286802       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0701 12:16:09.287240       1 server.go:872] "Version info" version="v1.30.2"
	I0701 12:16:09.287274       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:16:09.288803       1 config.go:192] "Starting service config controller"
	I0701 12:16:09.288830       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0701 12:16:09.289262       1 config.go:101] "Starting endpoint slice config controller"
	I0701 12:16:09.289283       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0701 12:16:09.290101       1 config.go:319] "Starting node config controller"
	I0701 12:16:09.290125       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0701 12:16:09.389941       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0701 12:16:09.390030       1 shared_informer.go:320] Caches are synced for service config
	I0701 12:16:09.390393       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [710f5c3a9f85] <==
	I0701 12:25:23.858069       1 server_linux.go:69] "Using iptables proxy"
	I0701 12:25:23.875125       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.16"]
	I0701 12:25:23.958416       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0701 12:25:23.958505       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0701 12:25:23.958526       1 server_linux.go:165] "Using iptables Proxier"
	I0701 12:25:23.963079       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0701 12:25:23.963683       1 server.go:872] "Version info" version="v1.30.2"
	I0701 12:25:23.963707       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 12:25:23.967807       1 config.go:192] "Starting service config controller"
	I0701 12:25:23.968544       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0701 12:25:23.968625       1 config.go:101] "Starting endpoint slice config controller"
	I0701 12:25:23.968632       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0701 12:25:23.972994       1 config.go:319] "Starting node config controller"
	I0701 12:25:23.973007       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0701 12:25:24.069380       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0701 12:25:24.069565       1 shared_informer.go:320] Caches are synced for service config
	I0701 12:25:24.073577       1 shared_informer.go:320] Caches are synced for node config
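
The "Waiting for caches to sync" / "Caches are synced" pairs repeated across kube-proxy and the other components come from client-go's shared-informer startup: start the informers, then block until their local caches are populated before acting. A compressed sketch of the same pattern (not kube-proxy's code; the kubeconfig path and resync interval are placeholders):

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		// Start a service informer, then wait for its cache before
		// consuming it, mirroring the log's startup sequence.
		factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
		svcInformer := factory.Core().V1().Services().Informer()
	
		ctx, cancel := context.WithCancel(context.Background())
		defer cancel()
		factory.Start(ctx.Done())
		if ok := cache.WaitForCacheSync(ctx.Done(), svcInformer.HasSynced); !ok {
			panic("caches did not sync")
		}
		fmt.Println("caches are synced for service config")
	}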
	
	
	==> kube-scheduler [2d71437c5f06] <==
	Trace[1766396451]: [10.001227292s] [10.001227292s] END
	E0701 12:23:38.923742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	W0701 12:23:40.712171       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.16:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:23:40.712228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.16:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:23:40.847258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.16:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35008->192.168.39.16:8443: read: connection reset by peer
	I0701 12:23:40.847402       1 trace.go:236] Trace[2065780204]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Jul-2024 12:23:30.463) (total time: 10384ms):
	Trace[2065780204]: ---"Objects listed" error:Get "https://192.168.39.16:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35008->192.168.39.16:8443: read: connection reset by peer 10384ms (12:23:40.847)
	Trace[2065780204]: [10.384136255s] [10.384136255s] END
	E0701 12:23:40.847432       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.16:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35008->192.168.39.16:8443: read: connection reset by peer
	W0701 12:23:40.847437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35050->192.168.39.16:8443: read: connection reset by peer
	W0701 12:23:40.847259       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.16:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35028->192.168.39.16:8443: read: connection reset by peer
	E0701 12:23:40.847495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35050->192.168.39.16:8443: read: connection reset by peer
	E0701 12:23:40.847499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.16:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35028->192.168.39.16:8443: read: connection reset by peer
	W0701 12:23:40.847682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.16:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35066->192.168.39.16:8443: read: connection reset by peer
	E0701 12:23:40.847714       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.16:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35066->192.168.39.16:8443: read: connection reset by peer
	W0701 12:23:40.848299       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.16:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35034->192.168.39.16:8443: read: connection reset by peer
	E0701 12:23:40.848357       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.16:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:35034->192.168.39.16:8443: read: connection reset by peer
	W0701 12:23:51.660283       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:23:51.660337       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:23:54.252191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:23:54.252565       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:23:55.679907       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:23:55.680228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:24:00.290141       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0701 12:24:00.290379       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [693eb0b8f5d7] <==
	W0701 12:25:05.563752       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.16:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:25:05.563793       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.16:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:25:05.636901       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	E0701 12:25:05.637119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
	W0701 12:25:11.653758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0701 12:25:11.654470       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0701 12:25:11.654763       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0701 12:25:11.655634       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0701 12:25:11.655894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0701 12:25:11.655933       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0701 12:25:11.659133       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0701 12:25:11.659348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0701 12:25:13.850760       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0701 12:28:37.499217       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-qfj9k\": pod kube-proxy-qfj9k is already assigned to node \"ha-735960-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-qfj9k" node="ha-735960-m05"
	E0701 12:28:37.497306       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-tbjq4\": pod kindnet-tbjq4 is already assigned to node \"ha-735960-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-tbjq4" node="ha-735960-m05"
	E0701 12:28:37.502534       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c7cd6384-ae4d-47ce-b880-302cf834667f(kube-system/kindnet-tbjq4) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-tbjq4"
	E0701 12:28:37.502801       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-tbjq4\": pod kindnet-tbjq4 is already assigned to node \"ha-735960-m05\"" pod="kube-system/kindnet-tbjq4"
	I0701 12:28:37.502972       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-tbjq4" node="ha-735960-m05"
	E0701 12:28:37.503947       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9cfb7903-e04a-4cdf-b39a-11e890622831(kube-system/kube-proxy-qfj9k) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-qfj9k"
	E0701 12:28:37.503993       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-qfj9k\": pod kube-proxy-qfj9k is already assigned to node \"ha-735960-m05\"" pod="kube-system/kube-proxy-qfj9k"
	I0701 12:28:37.504262       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-qfj9k" node="ha-735960-m05"
	E0701 12:28:37.500193       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-k4d6m\": pod kindnet-k4d6m is already assigned to node \"ha-735960-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-k4d6m" node="ha-735960-m05"
	E0701 12:28:37.505096       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod ccf70e82-9d7c-4c5f-ad9f-d02861ea0794(kube-system/kindnet-k4d6m) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-k4d6m"
	E0701 12:28:37.510144       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-k4d6m\": pod kindnet-k4d6m is already assigned to node \"ha-735960-m05\"" pod="kube-system/kindnet-k4d6m"
	I0701 12:28:37.510199       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-k4d6m" node="ha-735960-m05"
	
	
	==> kubelet <==
	Jul 01 12:25:24 ha-735960 kubelet[1550]: I0701 12:25:24.225255    1550 scope.go:117] "RemoveContainer" containerID="1ef6d9da6a9c5d6e77ef8d42735bdba288502d231394d299243bc1b669822d1c"
	Jul 01 12:25:25 ha-735960 kubelet[1550]: I0701 12:25:25.225212    1550 scope.go:117] "RemoveContainer" containerID="f472aef5302fd01233da1bd769162654c0b238cb1a3b0c9b24deef221c4821a3"
	Jul 01 12:25:26 ha-735960 kubelet[1550]: I0701 12:25:26.229286    1550 scope.go:117] "RemoveContainer" containerID="97d58c94f3fdcc84b84c3c46e6b25f8e7da118d5c9cd53058ae127fe580a40a7"
	Jul 01 12:25:26 ha-735960 kubelet[1550]: E0701 12:25:26.319340    1550 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 01 12:25:26 ha-735960 kubelet[1550]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 01 12:25:26 ha-735960 kubelet[1550]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 01 12:25:26 ha-735960 kubelet[1550]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 01 12:25:26 ha-735960 kubelet[1550]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 01 12:25:26 ha-735960 kubelet[1550]: I0701 12:25:26.443283    1550 scope.go:117] "RemoveContainer" containerID="14112a4d8f2cb5cfea8813c52de120eeef6fe681ebf589fd8708d1557c35b85f"
	Jul 01 12:25:26 ha-735960 kubelet[1550]: I0701 12:25:26.480472    1550 scope.go:117] "RemoveContainer" containerID="97d58c94f3fdcc84b84c3c46e6b25f8e7da118d5c9cd53058ae127fe580a40a7"
	Jul 01 12:26:26 ha-735960 kubelet[1550]: E0701 12:26:26.244909    1550 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 01 12:26:26 ha-735960 kubelet[1550]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 01 12:26:26 ha-735960 kubelet[1550]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 01 12:26:26 ha-735960 kubelet[1550]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 01 12:26:26 ha-735960 kubelet[1550]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 01 12:27:26 ha-735960 kubelet[1550]: E0701 12:27:26.245316    1550 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 01 12:27:26 ha-735960 kubelet[1550]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 01 12:27:26 ha-735960 kubelet[1550]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 01 12:27:26 ha-735960 kubelet[1550]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 01 12:27:26 ha-735960 kubelet[1550]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 01 12:28:26 ha-735960 kubelet[1550]: E0701 12:28:26.245797    1550 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 01 12:28:26 ha-735960 kubelet[1550]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 01 12:28:26 ha-735960 kubelet[1550]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 01 12:28:26 ha-735960 kubelet[1550]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 01 12:28:26 ha-735960 kubelet[1550]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-735960 -n ha-735960
helpers_test.go:261: (dbg) Run:  kubectl --context ha-735960 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (3.55s)
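Two recurring symptoms in the dump above are worth separating: the scheduler's reflector errors ("connect: connection refused" to https://192.168.39.16:8443) indicate the apiserver was unreachable during the control-plane restart, while the kubelet's iptables canary failures report that the guest kernel has no ip6tables `nat' table (the log itself suggests insmod or a kernel upgrade). Below is a minimal sketch of waiting out the apiserver restart window before asserting on cluster state; the helper is hypothetical, not part of the minikube test suite, and the endpoint is simply the control-plane address seen in the logs.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForAPIServer polls https://<host>/healthz until it answers 200 OK or
	// the timeout elapses. The apiserver in this setup serves a self-signed
	// certificate, so verification is skipped for this probe only.
	func waitForAPIServer(host string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://" + host + "/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", host, timeout)
	}

	func main() {
		// 192.168.39.16:8443 is the control-plane endpoint from the logs above.
		if err := waitForAPIServer("192.168.39.16:8443", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}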

                                                
                                    

Test pass (302/341)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 8.26
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.2/json-events 3.29
13 TestDownloadOnly/v1.30.2/preload-exists 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.06
18 TestDownloadOnly/v1.30.2/DeleteAll 0.14
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.58
22 TestOffline 74.73
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 223.17
29 TestAddons/parallel/Registry 14.42
30 TestAddons/parallel/Ingress 20.38
31 TestAddons/parallel/InspektorGadget 12.11
32 TestAddons/parallel/MetricsServer 5.62
33 TestAddons/parallel/HelmTiller 12.42
35 TestAddons/parallel/CSI 56.98
36 TestAddons/parallel/Headlamp 12.89
37 TestAddons/parallel/CloudSpanner 6.45
38 TestAddons/parallel/LocalPath 54.56
39 TestAddons/parallel/NvidiaDevicePlugin 5.41
40 TestAddons/parallel/Yakd 6.01
41 TestAddons/parallel/Volcano 41.19
44 TestAddons/serial/GCPAuth/Namespaces 0.14
45 TestAddons/StoppedEnableDisable 13.57
46 TestCertOptions 90.91
47 TestCertExpiration 353.3
48 TestDockerFlags 75.38
49 TestForceSystemdFlag 68.18
50 TestForceSystemdEnv 102.93
52 TestKVMDriverInstallOrUpdate 5.02
56 TestErrorSpam/setup 46.7
57 TestErrorSpam/start 0.37
58 TestErrorSpam/status 0.72
59 TestErrorSpam/pause 1.21
60 TestErrorSpam/unpause 1.25
61 TestErrorSpam/stop 14.79
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 75.11
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 40.69
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.2
73 TestFunctional/serial/CacheCmd/cache/add_local 1.24
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.13
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.11
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 42.81
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 0.97
84 TestFunctional/serial/LogsFileCmd 1.01
85 TestFunctional/serial/InvalidService 4.41
87 TestFunctional/parallel/ConfigCmd 0.35
88 TestFunctional/parallel/DashboardCmd 14.42
89 TestFunctional/parallel/DryRun 0.3
90 TestFunctional/parallel/InternationalLanguage 0.17
91 TestFunctional/parallel/StatusCmd 0.73
95 TestFunctional/parallel/ServiceCmdConnect 26.64
96 TestFunctional/parallel/AddonsCmd 0.13
97 TestFunctional/parallel/PersistentVolumeClaim 50.52
99 TestFunctional/parallel/SSHCmd 0.42
100 TestFunctional/parallel/CpCmd 1.37
101 TestFunctional/parallel/MySQL 30.76
102 TestFunctional/parallel/FileSync 0.27
103 TestFunctional/parallel/CertSync 1.42
107 TestFunctional/parallel/NodeLabels 0.08
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.23
111 TestFunctional/parallel/License 0.25
112 TestFunctional/parallel/Version/short 0.05
113 TestFunctional/parallel/Version/components 0.71
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.45
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
118 TestFunctional/parallel/ImageCommands/ImageBuild 3.04
119 TestFunctional/parallel/ImageCommands/Setup 1.3
120 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.85
122 TestFunctional/parallel/ProfileCmd/profile_list 0.31
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.27
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.62
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.71
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.48
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.82
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.59
139 TestFunctional/parallel/ServiceCmd/DeployApp 15.21
140 TestFunctional/parallel/MountCmd/any-port 6.52
141 TestFunctional/parallel/ServiceCmd/List 1.25
142 TestFunctional/parallel/MountCmd/specific-port 1.91
143 TestFunctional/parallel/ServiceCmd/JSONOutput 1.27
144 TestFunctional/parallel/MountCmd/VerifyCleanup 1.23
145 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
146 TestFunctional/parallel/ServiceCmd/Format 0.34
147 TestFunctional/parallel/ServiceCmd/URL 0.35
148 TestFunctional/parallel/DockerEnv/bash 0.96
149 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
152 TestFunctional/delete_addon-resizer_images 0.07
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
155 TestGvisorAddon 209.74
158 TestMultiControlPlane/serial/StartCluster 206.65
159 TestMultiControlPlane/serial/DeployApp 6.13
160 TestMultiControlPlane/serial/PingHostFromPods 1.32
161 TestMultiControlPlane/serial/AddWorkerNode 49.75
162 TestMultiControlPlane/serial/NodeLabels 0.07
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.55
164 TestMultiControlPlane/serial/CopyFile 13.03
165 TestMultiControlPlane/serial/StopSecondaryNode 13.16
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.39
167 TestMultiControlPlane/serial/RestartSecondaryNode 36.38
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.53
179 TestImageBuild/serial/Setup 49.82
180 TestImageBuild/serial/NormalBuild 1.49
181 TestImageBuild/serial/BuildWithBuildArg 1.03
182 TestImageBuild/serial/BuildWithDockerIgnore 0.41
183 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.29
187 TestJSONOutput/start/Command 63.45
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.58
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.51
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.51
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.2
215 TestMainNoArgs 0.04
216 TestMinikubeProfile 100.85
219 TestMountStart/serial/StartWithMountFirst 27.35
220 TestMountStart/serial/VerifyMountFirst 0.39
221 TestMountStart/serial/StartWithMountSecond 29.05
222 TestMountStart/serial/VerifyMountSecond 0.37
223 TestMountStart/serial/DeleteFirst 0.69
224 TestMountStart/serial/VerifyMountPostDelete 0.4
225 TestMountStart/serial/Stop 2.47
226 TestMountStart/serial/RestartStopped 25.91
227 TestMountStart/serial/VerifyMountPostStop 0.37
230 TestMultiNode/serial/FreshStart2Nodes 117.79
231 TestMultiNode/serial/DeployApp2Nodes 4.6
232 TestMultiNode/serial/PingHostFrom2Pods 0.85
233 TestMultiNode/serial/AddNode 48.58
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.21
236 TestMultiNode/serial/CopyFile 7.14
237 TestMultiNode/serial/StopNode 3.27
238 TestMultiNode/serial/StartAfterStop 32.27
239 TestMultiNode/serial/RestartKeepsNodes 260.2
240 TestMultiNode/serial/DeleteNode 2.35
241 TestMultiNode/serial/StopMultiNode 25.1
242 TestMultiNode/serial/RestartMultiNode 87.57
243 TestMultiNode/serial/ValidateNameConflict 48.99
248 TestPreload 150.88
250 TestScheduledStopUnix 120.53
251 TestSkaffold 140.47
254 TestRunningBinaryUpgrade 191.32
256 TestKubernetesUpgrade 207.05
264 TestPause/serial/Start 91.32
279 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
280 TestNoKubernetes/serial/StartWithK8s 74.74
281 TestPause/serial/SecondStartNoReconfiguration 71.51
282 TestNoKubernetes/serial/StartWithStopK8s 8.1
283 TestNoKubernetes/serial/Start 28.69
284 TestPause/serial/Pause 0.62
285 TestPause/serial/VerifyStatus 0.25
286 TestPause/serial/Unpause 0.6
287 TestPause/serial/PauseAgain 0.92
288 TestPause/serial/DeletePaused 1.09
289 TestPause/serial/VerifyDeletedResources 0.56
290 TestStoppedBinaryUpgrade/Setup 0.37
291 TestStoppedBinaryUpgrade/Upgrade 165.69
292 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
293 TestNoKubernetes/serial/ProfileList 14.97
294 TestNoKubernetes/serial/Stop 2.31
295 TestNoKubernetes/serial/StartNoArgs 56.1
296 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
297 TestStoppedBinaryUpgrade/MinikubeLogs 1.41
298 TestNetworkPlugins/group/auto/Start 80.18
299 TestNetworkPlugins/group/kindnet/Start 105.28
300 TestNetworkPlugins/group/calico/Start 115.43
301 TestNetworkPlugins/group/auto/KubeletFlags 0.23
302 TestNetworkPlugins/group/auto/NetCatPod 11.22
303 TestNetworkPlugins/group/auto/DNS 0.16
304 TestNetworkPlugins/group/auto/Localhost 0.14
305 TestNetworkPlugins/group/auto/HairPin 0.14
306 TestNetworkPlugins/group/custom-flannel/Start 76.98
307 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
308 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
309 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
310 TestNetworkPlugins/group/kindnet/DNS 0.19
311 TestNetworkPlugins/group/kindnet/Localhost 0.13
312 TestNetworkPlugins/group/kindnet/HairPin 0.12
313 TestNetworkPlugins/group/false/Start 79.65
314 TestNetworkPlugins/group/calico/ControllerPod 6.01
315 TestNetworkPlugins/group/calico/KubeletFlags 0.21
316 TestNetworkPlugins/group/calico/NetCatPod 11.24
317 TestNetworkPlugins/group/calico/DNS 0.2
318 TestNetworkPlugins/group/calico/Localhost 0.23
319 TestNetworkPlugins/group/calico/HairPin 0.21
320 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
321 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.33
322 TestNetworkPlugins/group/enable-default-cni/Start 82.69
323 TestNetworkPlugins/group/custom-flannel/DNS 0.22
324 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
325 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
326 TestNetworkPlugins/group/flannel/Start 99.64
327 TestNetworkPlugins/group/bridge/Start 108.59
328 TestNetworkPlugins/group/false/KubeletFlags 0.22
329 TestNetworkPlugins/group/false/NetCatPod 11.28
330 TestNetworkPlugins/group/false/DNS 0.17
331 TestNetworkPlugins/group/false/Localhost 0.14
332 TestNetworkPlugins/group/false/HairPin 0.13
333 TestNetworkPlugins/group/kubenet/Start 127.88
334 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
335 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.24
336 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
337 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
338 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
340 TestStartStop/group/old-k8s-version/serial/FirstStart 167.45
341 TestNetworkPlugins/group/flannel/ControllerPod 6.01
342 TestNetworkPlugins/group/flannel/KubeletFlags 0.35
343 TestNetworkPlugins/group/flannel/NetCatPod 13.64
344 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
345 TestNetworkPlugins/group/bridge/NetCatPod 10.54
346 TestNetworkPlugins/group/flannel/DNS 0.22
347 TestNetworkPlugins/group/flannel/Localhost 0.14
348 TestNetworkPlugins/group/flannel/HairPin 0.15
349 TestNetworkPlugins/group/bridge/DNS 0.2
350 TestNetworkPlugins/group/bridge/Localhost 0.14
351 TestNetworkPlugins/group/bridge/HairPin 0.14
353 TestStartStop/group/no-preload/serial/FirstStart 86.49
355 TestStartStop/group/embed-certs/serial/FirstStart 103.53
356 TestNetworkPlugins/group/kubenet/KubeletFlags 0.23
357 TestNetworkPlugins/group/kubenet/NetCatPod 11.38
358 TestNetworkPlugins/group/kubenet/DNS 0.2
359 TestNetworkPlugins/group/kubenet/Localhost 0.15
360 TestNetworkPlugins/group/kubenet/HairPin 0.15
362 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 74.17
363 TestStartStop/group/no-preload/serial/DeployApp 9.32
364 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.94
365 TestStartStop/group/no-preload/serial/Stop 14.34
366 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
367 TestStartStop/group/no-preload/serial/SecondStart 303.34
368 TestStartStop/group/embed-certs/serial/DeployApp 9.33
369 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.04
370 TestStartStop/group/embed-certs/serial/Stop 13.38
371 TestStartStop/group/old-k8s-version/serial/DeployApp 9.52
372 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
373 TestStartStop/group/embed-certs/serial/SecondStart 298.64
374 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.28
375 TestStartStop/group/old-k8s-version/serial/Stop 12.67
376 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.35
377 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
378 TestStartStop/group/old-k8s-version/serial/SecondStart 415.61
379 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.98
380 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.35
381 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.3
382 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 340.41
383 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
384 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
386 TestStartStop/group/no-preload/serial/Pause 2.5
388 TestStartStop/group/newest-cni/serial/FirstStart 68.26
389 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
390 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
391 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
392 TestStartStop/group/embed-certs/serial/Pause 2.57
393 TestStartStop/group/newest-cni/serial/DeployApp 0
394 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.01
395 TestStartStop/group/newest-cni/serial/Stop 7.69
396 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
397 TestStartStop/group/newest-cni/serial/SecondStart 39.85
398 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
399 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
400 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
401 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.35
402 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
403 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
404 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
405 TestStartStop/group/newest-cni/serial/Pause 2.11
406 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
407 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
408 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
409 TestStartStop/group/old-k8s-version/serial/Pause 2.25
TestDownloadOnly/v1.20.0/json-events (8.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-954135 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-954135 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (8.260376374s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.26s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-954135
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-954135: exit status 85 (59.922251ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-954135 | jenkins | v1.33.1 | 01 Jul 24 12:04 UTC |          |
	|         | -p download-only-954135        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/01 12:04:26
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 12:04:26.625121  637866 out.go:291] Setting OutFile to fd 1 ...
	I0701 12:04:26.625427  637866 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:04:26.625439  637866 out.go:304] Setting ErrFile to fd 2...
	I0701 12:04:26.625444  637866 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:04:26.625674  637866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
	W0701 12:04:26.625886  637866 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19166-630650/.minikube/config/config.json: open /home/jenkins/minikube-integration/19166-630650/.minikube/config/config.json: no such file or directory
	I0701 12:04:26.626553  637866 out.go:298] Setting JSON to true
	I0701 12:04:26.627550  637866 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6405,"bootTime":1719829062,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 12:04:26.627611  637866 start.go:139] virtualization: kvm guest
	I0701 12:04:26.630118  637866 out.go:97] [download-only-954135] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0701 12:04:26.630243  637866 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball: no such file or directory
	I0701 12:04:26.630304  637866 notify.go:220] Checking for updates...
	I0701 12:04:26.631968  637866 out.go:169] MINIKUBE_LOCATION=19166
	I0701 12:04:26.633571  637866 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:04:26.635003  637866 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:04:26.636579  637866 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-630650/.minikube
	I0701 12:04:26.638088  637866 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0701 12:04:26.640585  637866 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0701 12:04:26.640845  637866 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 12:04:26.677245  637866 out.go:97] Using the kvm2 driver based on user configuration
	I0701 12:04:26.677295  637866 start.go:297] selected driver: kvm2
	I0701 12:04:26.677301  637866 start.go:901] validating driver "kvm2" against <nil>
	I0701 12:04:26.677702  637866 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:04:26.677792  637866 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19166-630650/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0701 12:04:26.693659  637866 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0701 12:04:26.693720  637866 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 12:04:26.694256  637866 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0701 12:04:26.694438  637866 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0701 12:04:26.694510  637866 cni.go:84] Creating CNI manager for ""
	I0701 12:04:26.694530  637866 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0701 12:04:26.694604  637866 start.go:340] cluster config:
	{Name:download-only-954135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-954135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:04:26.694817  637866 iso.go:125] acquiring lock: {Name:mk5c70910f61bc270c83609c48670eaf9d7e0602 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:04:26.697084  637866 out.go:97] Downloading VM boot image ...
	I0701 12:04:26.697131  637866 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19166-630650/.minikube/cache/iso/amd64/minikube-v1.33.1-1719412936-19142-amd64.iso
	I0701 12:04:29.688587  637866 out.go:97] Starting "download-only-954135" primary control-plane node in "download-only-954135" cluster
	I0701 12:04:29.688620  637866 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0701 12:04:29.716339  637866 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0701 12:04:29.716389  637866 cache.go:56] Caching tarball of preloaded images
	I0701 12:04:29.716563  637866 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0701 12:04:29.718687  637866 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0701 12:04:29.718708  637866 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0701 12:04:29.746264  637866 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-954135 host does not exist
	  To start a cluster, run: "minikube start -p download-only-954135"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
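The non-zero exit above is expected: the profile was created with --download-only, so, as the dump shows, the control-plane host does not exist and `minikube logs` exits with status 85. A rough sketch of driving the same command from Go and reading the exit code follows (illustrative only; the real assertions live in aaa_download_only_test.go).

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-954135")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Exit status 85 is what the test above records for a profile
			// whose control-plane host does not exist.
			fmt.Printf("exit code %d, captured %d bytes of output\n", exitErr.ExitCode(), len(out))
			return
		}
		fmt.Println("command succeeded unexpectedly")
	}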

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-954135
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.2/json-events (3.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-570781 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-570781 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=kvm2 : (3.292579844s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (3.29s)

                                                
                                    
TestDownloadOnly/v1.30.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-570781
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-570781: exit status 85 (62.678842ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-954135 | jenkins | v1.33.1 | 01 Jul 24 12:04 UTC |                     |
	|         | -p download-only-954135        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 01 Jul 24 12:04 UTC | 01 Jul 24 12:04 UTC |
	| delete  | -p download-only-954135        | download-only-954135 | jenkins | v1.33.1 | 01 Jul 24 12:04 UTC | 01 Jul 24 12:04 UTC |
	| start   | -o=json --download-only        | download-only-570781 | jenkins | v1.33.1 | 01 Jul 24 12:04 UTC |                     |
	|         | -p download-only-570781        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/01 12:04:35
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 12:04:35.206712  638059 out.go:291] Setting OutFile to fd 1 ...
	I0701 12:04:35.206821  638059 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:04:35.206825  638059 out.go:304] Setting ErrFile to fd 2...
	I0701 12:04:35.206830  638059 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:04:35.207012  638059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
	I0701 12:04:35.207563  638059 out.go:298] Setting JSON to true
	I0701 12:04:35.208477  638059 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6413,"bootTime":1719829062,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 12:04:35.208538  638059 start.go:139] virtualization: kvm guest
	I0701 12:04:35.210919  638059 out.go:97] [download-only-570781] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0701 12:04:35.211074  638059 notify.go:220] Checking for updates...
	I0701 12:04:35.212634  638059 out.go:169] MINIKUBE_LOCATION=19166
	I0701 12:04:35.214285  638059 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:04:35.215673  638059 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:04:35.216987  638059 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-630650/.minikube
	I0701 12:04:35.218314  638059 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-570781 host does not exist
	  To start a cluster, run: "minikube start -p download-only-570781"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-570781
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-300404 --alsologtostderr --binary-mirror http://127.0.0.1:34289 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-300404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-300404
--- PASS: TestBinaryMirror (0.58s)

                                                
                                    
TestOffline (74.73s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-431777 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-431777 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m13.71687334s)
helpers_test.go:175: Cleaning up "offline-docker-431777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-431777
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-431777: (1.011737836s)
--- PASS: TestOffline (74.73s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-877411
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-877411: exit status 85 (52.184053ms)

                                                
                                                
-- stdout --
	* Profile "addons-877411" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-877411"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-877411
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-877411: exit status 85 (53.42908ms)

                                                
                                                
-- stdout --
	* Profile "addons-877411" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-877411"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (223.17s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-877411 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-877411 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m43.172612343s)
--- PASS: TestAddons/Setup (223.17s)

                                                
                                    
TestAddons/parallel/Registry (14.42s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 20.182205ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-f7wd2" [e822e0a7-f1a5-4ec0-b4bd-eed3ce208ef8] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.02528518s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-r6cgf" [54998893-5442-40da-9379-1f66085b3dd9] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006605021s
addons_test.go:342: (dbg) Run:  kubectl --context addons-877411 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-877411 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-877411 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.718318851s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-877411 ip
2024/07/01 12:08:36 [DEBUG] GET http://192.168.39.41:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-877411 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.42s)
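The registry check above runs `wget --spider -S http://registry.kube-system.svc.cluster.local` from a throwaway busybox pod. A minimal Go equivalent of that probe, assuming it runs inside the cluster where the service name resolves (illustrative, not part of the test):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		// Service DNS name taken from the test output above; it resolves
		// only from inside the cluster.
		resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			fmt.Println("registry unreachable:", err)
			return
		}
		resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}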

                                                
                                    
TestAddons/parallel/Ingress (20.38s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-877411 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-877411 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-877411 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3c858677-ffe4-4220-9061-45998e7de95c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3c858677-ffe4-4220-9061-45998e7de95c] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.053802743s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-877411 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-877411 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-877411 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.41
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-877411 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-877411 addons disable ingress-dns --alsologtostderr -v=1: (1.172399546s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-877411 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-877411 addons disable ingress --alsologtostderr -v=1: (7.811057972s)
--- PASS: TestAddons/parallel/Ingress (20.38s)

                                                
                                    
TestAddons/parallel/InspektorGadget (12.11s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8smhv" [f2af6d9a-ba31-4ea5-83ab-e80bd1c7a18a] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004384657s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-877411
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-877411: (6.10145755s)
--- PASS: TestAddons/parallel/InspektorGadget (12.11s)

TestAddons/parallel/MetricsServer (5.62s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.472394ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-8h6hj" [1b82fdd3-4b63-4cd5-a8fa-ebeaaf2f51a9] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004238652s
addons_test.go:417: (dbg) Run:  kubectl --context addons-877411 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-877411 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.62s)

TestAddons/parallel/HelmTiller (12.42s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 20.33407ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-94bcv" [2462d8d3-b8d9-4063-a9cf-b19b6894fb6b] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.009914635s
addons_test.go:475: (dbg) Run:  kubectl --context addons-877411 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-877411 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.858069107s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-877411 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.42s)

TestAddons/parallel/CSI (56.98s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 6.41255ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-877411 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-877411 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9e87b46c-e160-4025-8c06-d1d83358e762] Pending
helpers_test.go:344: "task-pv-pod" [9e87b46c-e160-4025-8c06-d1d83358e762] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9e87b46c-e160-4025-8c06-d1d83358e762] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.00431424s
addons_test.go:586: (dbg) Run:  kubectl --context addons-877411 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-877411 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-877411 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-877411 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-877411 delete pod task-pv-pod: (1.291629718s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-877411 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-877411 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-877411 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [92eadf31-4576-456f-84bd-05f63779ef29] Pending
helpers_test.go:344: "task-pv-pod-restore" [92eadf31-4576-456f-84bd-05f63779ef29] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [92eadf31-4576-456f-84bd-05f63779ef29] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003338386s
addons_test.go:628: (dbg) Run:  kubectl --context addons-877411 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Done: kubectl --context addons-877411 delete pod task-pv-pod-restore: (1.047718316s)
addons_test.go:632: (dbg) Run:  kubectl --context addons-877411 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-877411 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-877411 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-877411 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.671139139s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-877411 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (56.98s)
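
Condensed, the snapshot/restore sequence the CSI test walks through looks like the sketch below, assuming the csi-hostpath-driver and volumesnapshots addons are enabled and the testdata manifests from the minikube repository:

    $ kubectl --context addons-877411 create -f testdata/csi-hostpath-driver/pvc.yaml
    $ kubectl --context addons-877411 get pvc hpvc -o jsonpath={.status.phase}    # poll until "Bound"
    $ kubectl --context addons-877411 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    $ kubectl --context addons-877411 create -f testdata/csi-hostpath-driver/snapshot.yaml
    $ kubectl --context addons-877411 get volumesnapshot new-snapshot-demo \
        -o jsonpath={.status.readyToUse}                                          # poll until "true"
    # Restore: remove the original consumers, then recreate PVC and pod
    # from the snapshot.
    $ kubectl --context addons-877411 delete pod task-pv-pod
    $ kubectl --context addons-877411 delete pvc hpvc
    $ kubectl --context addons-877411 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    $ kubectl --context addons-877411 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml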

TestAddons/parallel/Headlamp (12.89s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-877411 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-c2558" [2b684513-bff1-485c-9610-44cdec6c4eef] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-c2558" [2b684513-bff1-485c-9610-44cdec6c4eef] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003648812s
--- PASS: TestAddons/parallel/Headlamp (12.89s)

TestAddons/parallel/CloudSpanner (6.45s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-xcd5x" [1ce7242e-e8c0-46c8-a448-1e9e3d2325d3] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003204842s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-877411
--- PASS: TestAddons/parallel/CloudSpanner (6.45s)

TestAddons/parallel/LocalPath (54.56s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-877411 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-877411 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877411 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ab33cad7-e512-4251-b81c-f4b72305a8f5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ab33cad7-e512-4251-b81c-f4b72305a8f5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ab33cad7-e512-4251-b81c-f4b72305a8f5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.006189202s
addons_test.go:992: (dbg) Run:  kubectl --context addons-877411 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-877411 ssh "cat /opt/local-path-provisioner/pvc-e4c27e4a-cdf1-439d-8bb5-9b8b13f9a482_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-877411 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-877411 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-877411 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-amd64 -p addons-877411 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.594935527s)
--- PASS: TestAddons/parallel/LocalPath (54.56s)
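
The local-path check above amounts to binding a PVC through the rancher local-path provisioner and confirming the data lands on the node. A sketch, assuming the storage-provisioner-rancher addon is enabled:

    $ kubectl --context addons-877411 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    $ kubectl --context addons-877411 apply -f testdata/storage-provisioner-rancher/pod.yaml
    $ kubectl --context addons-877411 get pvc test-pvc -o jsonpath={.status.phase}    # poll until "Bound"
    # Provisioned volumes live under /opt/local-path-provisioner inside
    # the VM, one directory per PV (named pvc-<uid>_<namespace>_<pvc-name>):
    $ minikube -p addons-877411 ssh "ls /opt/local-path-provisioner"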

TestAddons/parallel/NvidiaDevicePlugin (5.41s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-jkwj4" [b41f0ab3-d0f7-4a78-894d-f9330cf639eb] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005505838s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-877411
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.41s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-lbsjv" [9cdf04db-12f6-463f-918e-4440b6f7f3d5] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005948231s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/parallel/Volcano (41.19s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:905: volcano-controller stabilized in 2.39491ms
addons_test.go:897: volcano-admission stabilized in 3.256348ms
addons_test.go:889: volcano-scheduler stabilized in 4.328165ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-765f888978-hnf59" [5ee1f107-d14d-4be2-bd99-92a9e563f213] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.004469192s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-7b497cf95b-jwzx5" [13948019-cb78-4c63-8e29-98a797509598] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.006960844s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controller-86c5446455-rlqdj" [e293e6e9-0f44-4f7a-81de-15dd0f4a4b92] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.004031227s
addons_test.go:924: (dbg) Run:  kubectl --context addons-877411 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-877411 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-877411 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [34d9c553-b931-40ed-bed8-a2298ec7390a] Pending
helpers_test.go:344: "test-job-nginx-0" [34d9c553-b931-40ed-bed8-a2298ec7390a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [34d9c553-b931-40ed-bed8-a2298ec7390a] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 16.009231237s
addons_test.go:960: (dbg) Run:  out/minikube-linux-amd64 -p addons-877411 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-linux-amd64 -p addons-877411 addons disable volcano --alsologtostderr -v=1: (9.772566484s)
--- PASS: TestAddons/parallel/Volcano (41.19s)
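
The Volcano check drives a VolcanoJob end to end. A sketch, assuming the volcano addon is enabled and a manifest like testdata/vcjob.yaml, which defines job test-job in namespace my-volcano:

    $ kubectl --context addons-877411 delete -n volcano-system job volcano-admission-init
    $ kubectl --context addons-877411 create -f testdata/vcjob.yaml
    $ kubectl --context addons-877411 get vcjob -n my-volcano
    # The job's pods carry a volcano.sh/job-name label; wait until Running.
    $ kubectl --context addons-877411 get pods -n my-volcano -l volcano.sh/job-name=test-job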

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-877411 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-877411 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)
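
The namespace check is small enough to quote whole: gcp-auth is expected to mirror its pull secret into any newly created namespace. Manually (assuming the addon is enabled):

    $ kubectl --context addons-877411 create ns new-namespace
    $ kubectl --context addons-877411 get secret gcp-auth -n new-namespace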

TestAddons/StoppedEnableDisable (13.57s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-877411
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-877411: (13.294895648s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-877411
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-877411
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-877411
--- PASS: TestAddons/StoppedEnableDisable (13.57s)

TestCertOptions (90.91s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-771628 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
E0701 12:58:16.299416  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/skaffold-459115/client.crt: no such file or directory
E0701 12:58:22.863634  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-771628 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m28.621261987s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-771628 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-771628 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-771628 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-771628" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-771628
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-771628: (1.659247605s)
--- PASS: TestCertOptions (90.91s)
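
TestCertOptions verifies that extra SANs and a custom port are honored in the generated apiserver certificate. A sketch of the same verification, assuming an installed minikube binary:

    $ minikube start -p cert-options-771628 --memory=2048 \
        --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
        --apiserver-names=localhost --apiserver-names=www.google.com \
        --apiserver-port=8555 --driver=kvm2
    # The extra IPs and names should show up as SANs on the cert:
    $ minikube -p cert-options-771628 ssh \
        "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
        | grep -A1 "Subject Alternative Name"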

TestCertExpiration (353.3s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-080681 --memory=2048 --cert-expiration=3m --driver=kvm2 
E0701 12:56:54.375302  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/skaffold-459115/client.crt: no such file or directory
E0701 12:56:54.380643  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/skaffold-459115/client.crt: no such file or directory
E0701 12:56:54.390955  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/skaffold-459115/client.crt: no such file or directory
E0701 12:56:54.411253  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/skaffold-459115/client.crt: no such file or directory
E0701 12:56:54.451631  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/skaffold-459115/client.crt: no such file or directory
E0701 12:56:54.532033  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/skaffold-459115/client.crt: no such file or directory
E0701 12:56:54.692557  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/skaffold-459115/client.crt: no such file or directory
E0701 12:56:55.013292  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/skaffold-459115/client.crt: no such file or directory
E0701 12:56:55.653888  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/skaffold-459115/client.crt: no such file or directory
E0701 12:56:56.934155  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/skaffold-459115/client.crt: no such file or directory
E0701 12:56:59.495661  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/skaffold-459115/client.crt: no such file or directory
E0701 12:57:04.616555  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/skaffold-459115/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-080681 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m56.900385616s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-080681 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-080681 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (55.210229582s)
helpers_test.go:175: Cleaning up "cert-expiration-080681" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-080681
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-080681: (1.190955638s)
--- PASS: TestCertExpiration (353.30s)

TestDockerFlags (75.38s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-399433 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-399433 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m13.915746402s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-399433 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-399433 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-399433" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-399433
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-399433: (1.010091557s)
--- PASS: TestDockerFlags (75.38s)
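
TestDockerFlags confirms that --docker-env and --docker-opt are threaded through to the dockerd systemd unit inside the VM. A sketch:

    $ minikube start -p docker-flags-399433 --memory=2048 \
        --docker-env=FOO=BAR --docker-env=BAZ=BAT \
        --docker-opt=debug --docker-opt=icc=true --driver=kvm2
    # --docker-env values surface in the unit's Environment,
    # --docker-opt values in its ExecStart line:
    $ minikube -p docker-flags-399433 ssh "sudo systemctl show docker --property=Environment --no-pager"
    $ minikube -p docker-flags-399433 ssh "sudo systemctl show docker --property=ExecStart --no-pager"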

TestForceSystemdFlag (68.18s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-110838 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-110838 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m7.104908681s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-110838 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-110838" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-110838
--- PASS: TestForceSystemdFlag (68.18s)
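
The assertion behind TestForceSystemdFlag is that --force-systemd switches Docker's cgroup driver. A sketch:

    $ minikube start -p force-systemd-flag-110838 --memory=2048 --force-systemd --driver=kvm2
    # Expected to print "systemd" when the flag took effect:
    $ minikube -p force-systemd-flag-110838 ssh "docker info --format {{.CgroupDriver}}"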

TestForceSystemdEnv (102.93s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-376343 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-376343 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m41.491818218s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-376343 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-376343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-376343
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-376343: (1.180719424s)
--- PASS: TestForceSystemdEnv (102.93s)

TestKVMDriverInstallOrUpdate (5.02s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.02s)

TestErrorSpam/setup (46.7s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-928095 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-928095 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-928095 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-928095 --driver=kvm2 : (46.698752385s)
--- PASS: TestErrorSpam/setup (46.70s)

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928095 --log_dir /tmp/nospam-928095 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928095 --log_dir /tmp/nospam-928095 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928095 --log_dir /tmp/nospam-928095 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.72s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928095 --log_dir /tmp/nospam-928095 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928095 --log_dir /tmp/nospam-928095 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928095 --log_dir /tmp/nospam-928095 status
--- PASS: TestErrorSpam/status (0.72s)

TestErrorSpam/pause (1.21s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928095 --log_dir /tmp/nospam-928095 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928095 --log_dir /tmp/nospam-928095 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928095 --log_dir /tmp/nospam-928095 pause
--- PASS: TestErrorSpam/pause (1.21s)

TestErrorSpam/unpause (1.25s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928095 --log_dir /tmp/nospam-928095 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928095 --log_dir /tmp/nospam-928095 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928095 --log_dir /tmp/nospam-928095 unpause
--- PASS: TestErrorSpam/unpause (1.25s)

TestErrorSpam/stop (14.79s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928095 --log_dir /tmp/nospam-928095 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-928095 --log_dir /tmp/nospam-928095 stop: (12.488293892s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928095 --log_dir /tmp/nospam-928095 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928095 --log_dir /tmp/nospam-928095 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-928095 --log_dir /tmp/nospam-928095 stop: (1.465271196s)
--- PASS: TestErrorSpam/stop (14.79s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/test/nested/copy/637854/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (75.11s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-377045 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-377045 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m15.114100998s)
--- PASS: TestFunctional/serial/StartWithProxy (75.11s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.69s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-377045 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-377045 --alsologtostderr -v=8: (40.684563953s)
functional_test.go:659: soft start took 40.685286473s for "functional-377045" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.69s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-377045 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.20s)

TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-377045 /tmp/TestFunctionalserialCacheCmdcacheadd_local3228080347/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 cache add minikube-local-cache-test:functional-377045
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 cache delete minikube-local-cache-test:functional-377045
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-377045
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377045 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (205.63405ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.13s)
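
The cache_reload sequence above is worth spelling out, since the FATA output in the middle is the expected state rather than a failure. A sketch against the same profile:

    $ minikube -p functional-377045 ssh sudo docker rmi registry.k8s.io/pause:latest
    $ minikube -p functional-377045 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    # ...fails with 'no such image' now that the image is gone...
    $ minikube -p functional-377045 cache reload
    $ minikube -p functional-377045 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    # ...succeeds again: reload pushed the cached image back onto the node.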

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 kubectl -- --context functional-377045 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-377045 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (42.81s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-377045 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0701 12:13:22.863601  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
E0701 12:13:22.869617  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
E0701 12:13:22.879931  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
E0701 12:13:22.900206  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
E0701 12:13:22.940551  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
E0701 12:13:23.020953  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
E0701 12:13:23.181372  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
E0701 12:13:23.501989  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
E0701 12:13:24.142999  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
E0701 12:13:25.424165  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
E0701 12:13:27.985011  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
E0701 12:13:33.105647  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
E0701 12:13:43.345887  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
E0701 12:14:03.826471  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-377045 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.814461106s)
functional_test.go:757: restart took 42.814609517s for "functional-377045" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.81s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-377045 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
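
The test parses the full pod JSON to assert phase and readiness of each control-plane component. A jsonpath one-liner (not what the test itself runs) yields the same phase summary:

    $ kubectl --context functional-377045 get po -l tier=control-plane -n kube-system \
        -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'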

TestFunctional/serial/LogsCmd (0.97s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 logs
--- PASS: TestFunctional/serial/LogsCmd (0.97s)

TestFunctional/serial/LogsFileCmd (1.01s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 logs --file /tmp/TestFunctionalserialLogsFileCmd3437144543/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-377045 logs --file /tmp/TestFunctionalserialLogsFileCmd3437144543/001/logs.txt: (1.011368386s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.01s)

TestFunctional/serial/InvalidService (4.41s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-377045 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-377045
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-377045: exit status 115 (284.19914ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.77:32740 |
	|-----------|-------------|-------------|----------------------------|
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-377045 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.41s)

TestFunctional/parallel/ConfigCmd (0.35s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377045 config get cpus: exit status 14 (51.883844ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377045 config get cpus: exit status 14 (54.880151ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
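
The round-trip the test drives, as a sketch (same binary and profile as above; per the log, "config get" on an unset key fails with exit status 14):

	out/minikube-linux-amd64 -p functional-377045 config set cpus 2
	out/minikube-linux-amd64 -p functional-377045 config get cpus     # prints 2, exit 0
	out/minikube-linux-amd64 -p functional-377045 config unset cpus
	out/minikube-linux-amd64 -p functional-377045 config get cpus     # key gone -> exit 14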

TestFunctional/parallel/DashboardCmd (14.42s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-377045 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-377045 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 645566: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.42s)
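
What the test does, sketched in shell form; the helper's "unable to kill pid ... process already finished" message above only means the dashboard proxy had already exited by the time the test cleaned up:

	out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-377045 &
	# read the URL it prints, probe it once, then terminate the background process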

TestFunctional/parallel/DryRun (0.3s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-377045 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-377045 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (160.470748ms)

-- stdout --
	* [functional-377045] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19166
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19166-630650/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-630650/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0701 12:14:50.084061  646253 out.go:291] Setting OutFile to fd 1 ...
	I0701 12:14:50.084195  646253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:14:50.084207  646253 out.go:304] Setting ErrFile to fd 2...
	I0701 12:14:50.084213  646253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:14:50.084532  646253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
	I0701 12:14:50.085237  646253 out.go:298] Setting JSON to false
	I0701 12:14:50.086822  646253 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7028,"bootTime":1719829062,"procs":269,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 12:14:50.086917  646253 start.go:139] virtualization: kvm guest
	I0701 12:14:50.089317  646253 out.go:177] * [functional-377045] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0701 12:14:50.090953  646253 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 12:14:50.090994  646253 notify.go:220] Checking for updates...
	I0701 12:14:50.094083  646253 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:14:50.095588  646253 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:14:50.096882  646253 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-630650/.minikube
	I0701 12:14:50.098438  646253 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 12:14:50.100581  646253 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:14:50.102630  646253 config.go:182] Loaded profile config "functional-377045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:14:50.103273  646253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:14:50.103321  646253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:14:50.125115  646253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36291
	I0701 12:14:50.125631  646253 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:14:50.126264  646253 main.go:141] libmachine: Using API Version  1
	I0701 12:14:50.126310  646253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:14:50.126661  646253 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:14:50.126862  646253 main.go:141] libmachine: (functional-377045) Calling .DriverName
	I0701 12:14:50.127123  646253 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 12:14:50.127556  646253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:14:50.127605  646253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:14:50.144349  646253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46793
	I0701 12:14:50.144817  646253 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:14:50.145347  646253 main.go:141] libmachine: Using API Version  1
	I0701 12:14:50.145371  646253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:14:50.145745  646253 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:14:50.145939  646253 main.go:141] libmachine: (functional-377045) Calling .DriverName
	I0701 12:14:50.179525  646253 out.go:177] * Using the kvm2 driver based on existing profile
	I0701 12:14:50.181012  646253 start.go:297] selected driver: kvm2
	I0701 12:14:50.181039  646253 start.go:901] validating driver "kvm2" against &{Name:functional-377045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-377045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:14:50.181205  646253 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:14:50.183389  646253 out.go:177] 
	W0701 12:14:50.184589  646253 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0701 12:14:50.185751  646253 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-377045 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.30s)
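
--dry-run validates the requested flags against the existing profile without touching the VM, which is why an undersized memory request fails fast. A sketch:

	out/minikube-linux-amd64 start -p functional-377045 --dry-run --memory 250MB --driver=kvm2
	# exit status 23: RSRC_INSUFFICIENT_REQ_MEMORY (250MiB < the 1800MB usable minimum)
	out/minikube-linux-amd64 start -p functional-377045 --dry-run --driver=kvm2
	# no memory override -> validation passes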

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-377045 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-377045 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (166.00181ms)

-- stdout --
	* [functional-377045] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19166
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19166-630650/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-630650/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0701 12:14:49.966543  646217 out.go:291] Setting OutFile to fd 1 ...
	I0701 12:14:49.966699  646217 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:14:49.966713  646217 out.go:304] Setting ErrFile to fd 2...
	I0701 12:14:49.966719  646217 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:14:49.967154  646217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
	I0701 12:14:49.967914  646217 out.go:298] Setting JSON to false
	I0701 12:14:49.969441  646217 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7028,"bootTime":1719829062,"procs":267,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0701 12:14:49.969530  646217 start.go:139] virtualization: kvm guest
	I0701 12:14:49.972108  646217 out.go:177] * [functional-377045] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0701 12:14:49.973525  646217 notify.go:220] Checking for updates...
	I0701 12:14:49.975134  646217 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 12:14:49.976715  646217 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:14:49.978161  646217 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19166-630650/kubeconfig
	I0701 12:14:49.979632  646217 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-630650/.minikube
	I0701 12:14:49.981161  646217 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0701 12:14:49.982641  646217 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:14:49.984377  646217 config.go:182] Loaded profile config "functional-377045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:14:49.984962  646217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:14:49.985005  646217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:14:50.005155  646217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I0701 12:14:50.005553  646217 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:14:50.006221  646217 main.go:141] libmachine: Using API Version  1
	I0701 12:14:50.006243  646217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:14:50.006630  646217 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:14:50.006828  646217 main.go:141] libmachine: (functional-377045) Calling .DriverName
	I0701 12:14:50.007005  646217 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 12:14:50.007414  646217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:14:50.007464  646217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:14:50.025225  646217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37875
	I0701 12:14:50.025803  646217 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:14:50.026496  646217 main.go:141] libmachine: Using API Version  1
	I0701 12:14:50.026527  646217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:14:50.026849  646217 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:14:50.027013  646217 main.go:141] libmachine: (functional-377045) Calling .DriverName
	I0701 12:14:50.066130  646217 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0701 12:14:50.067486  646217 start.go:297] selected driver: kvm2
	I0701 12:14:50.067505  646217 start.go:901] validating driver "kvm2" against &{Name:functional-377045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-377045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 12:14:50.067650  646217 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:14:50.070460  646217 out.go:177] 
	W0701 12:14:50.071830  646217 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0701 12:14:50.073256  646217 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
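
This is the same dry-run scenario with a French locale in effect; the log does not record how the locale was injected, but presumably something like:

	LC_ALL=fr out/minikube-linux-amd64 start -p functional-377045 --dry-run --memory 250MB --driver=kvm2
	# same exit status 23, message rendered as "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : ..."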

TestFunctional/parallel/StatusCmd (0.73s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.73s)
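
The three output modes exercised above, sketched (the -f template pulls fields out of the status struct; -o json emits the same data machine-readably):

	out/minikube-linux-amd64 -p functional-377045 status                 # human-readable
	out/minikube-linux-amd64 -p functional-377045 status -f 'host:{{.Host}},kubeconfig:{{.Kubeconfig}}'
	out/minikube-linux-amd64 -p functional-377045 status -o json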

TestFunctional/parallel/ServiceCmdConnect (26.64s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-377045 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-377045 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-q8t8n" [1a19d4b9-3843-4339-bf27-e7c02a04453b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-q8t8n" [1a19d4b9-3843-4339-bf27-e7c02a04453b] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 26.109014458s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.77:30736
functional_test.go:1671: http://192.168.39.77:30736: success! body:

Hostname: hello-node-connect-57b4589c47-q8t8n

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.77:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.77:30736
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (26.64s)
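
The NodePort round-trip performed above, as a sketch (deployment image and port are the ones in the log; the curl step stands in for the test's HTTP probe):

	kubectl --context functional-377045 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-377045 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-377045 service hello-node-connect --url)
	curl -s "$URL"   # echoserver responds with the request details shown above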

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (50.52s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7824f8c9-848c-455a-b7eb-3c3ec0b29090] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004592422s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-377045 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-377045 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-377045 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-377045 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-377045 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bdd3b69e-7991-4778-bf66-22967c705c53] Pending
helpers_test.go:344: "sp-pod" [bdd3b69e-7991-4778-bf66-22967c705c53] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bdd3b69e-7991-4778-bf66-22967c705c53] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.004429129s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-377045 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-377045 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-377045 delete -f testdata/storage-provisioner/pod.yaml: (1.050303094s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-377045 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d13a9741-326c-469a-8484-41ef223ac7a8] Pending
helpers_test.go:344: "sp-pod" [d13a9741-326c-469a-8484-41ef223ac7a8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d13a9741-326c-469a-8484-41ef223ac7a8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.004056931s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-377045 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.52s)
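
The property under test is that data written through the claim survives pod deletion. Sketched with the manifests named in the log:

	kubectl --context functional-377045 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-377045 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-377045 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-377045 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-377045 apply -f testdata/storage-provisioner/pod.yaml   # fresh pod, same claim
	kubectl --context functional-377045 exec sp-pod -- ls /tmp/mount                     # foo persists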

TestFunctional/parallel/SSHCmd (0.42s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

TestFunctional/parallel/CpCmd (1.37s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh -n functional-377045 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 cp functional-377045:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2424611386/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh -n functional-377045 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh -n functional-377045 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.37s)
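
The three copy directions covered above, sketched: host into VM, VM back to host, and host into a VM path that does not exist yet (the follow-up cat in the log shows the parent directories get created):

	out/minikube-linux-amd64 -p functional-377045 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-377045 cp functional-377045:/home/docker/cp-test.txt /tmp/cp-test.txt
	out/minikube-linux-amd64 -p functional-377045 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt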

TestFunctional/parallel/MySQL (30.76s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-377045 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-r8jv9" [d2d8db63-5575-4316-8fbb-7a40e7001f23] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-r8jv9" [d2d8db63-5575-4316-8fbb-7a40e7001f23] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.014581056s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-377045 exec mysql-64454c8b5c-r8jv9 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-377045 exec mysql-64454c8b5c-r8jv9 -- mysql -ppassword -e "show databases;": exit status 1 (391.261632ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-377045 exec mysql-64454c8b5c-r8jv9 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-377045 exec mysql-64454c8b5c-r8jv9 -- mysql -ppassword -e "show databases;": exit status 1 (290.222354ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-377045 exec mysql-64454c8b5c-r8jv9 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-377045 exec mysql-64454c8b5c-r8jv9 -- mysql -ppassword -e "show databases;": exit status 1 (201.142743ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-377045 exec mysql-64454c8b5c-r8jv9 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.76s)
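
The retries above are expected: mysqld accepts connections before initialization completes, so ERROR 2002 (socket not ready) and ERROR 1045 (users not yet provisioned) are both transient during startup. A polling sketch:

	until kubectl --context functional-377045 exec mysql-64454c8b5c-r8jv9 -- \
	    mysql -ppassword -e "show databases;"; do
	  sleep 2   # keep retrying until the server is fully initialized
	done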

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/637854/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh "sudo cat /etc/test/nested/copy/637854/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)
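
Background, as a sketch: minikube syncs the tree under $MINIKUBE_HOME/files into the VM at the matching absolute paths on start, which is how /etc/test/nested/copy/637854/hosts appears inside the guest (the staging step is done by the test harness and is not shown in this log):

	mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/637854"
	echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/637854/hosts"
	out/minikube-linux-amd64 -p functional-377045 ssh "sudo cat /etc/test/nested/copy/637854/hosts"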

TestFunctional/parallel/CertSync (1.42s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/637854.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh "sudo cat /etc/ssl/certs/637854.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/637854.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh "sudo cat /usr/share/ca-certificates/637854.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/6378542.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh "sudo cat /etc/ssl/certs/6378542.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/6378542.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh "sudo cat /usr/share/ca-certificates/6378542.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.42s)
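
The .0 names checked above follow OpenSSL's subject-hash convention for certificate directories; a sketch for confirming that 51391683.0 is the hash link for the synced PEM:

	openssl x509 -in /usr/share/ca-certificates/637854.pem -noout -hash   # should print 51391683 if the link matches
	out/minikube-linux-amd64 -p functional-377045 ssh "sudo cat /etc/ssl/certs/51391683.0"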

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-377045 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
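
The go-template above iterates the first node's label map and prints the keys; run standalone it looks like:

	kubectl --context functional-377045 get nodes --output=go-template \
	  --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
	# prints label keys such as kubernetes.io/hostname and the minikube.k8s.io/* labels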

TestFunctional/parallel/NonActiveRuntimeDisabled (0.23s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377045 ssh "sudo systemctl is-active crio": exit status 1 (227.143651ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.23s)
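
Since the active runtime here is docker, crio must be inactive. systemctl is-active exits non-zero for an inactive unit (status 3 above), and minikube ssh surfaces that failure, which is the non-zero exit the test asserts:

	out/minikube-linux-amd64 -p functional-377045 ssh "sudo systemctl is-active crio"
	# stdout: inactive; inner ssh status 3, reported by minikube as exit status 1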

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.71s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.71s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-377045 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-377045
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-377045
docker.io/kubernetesui/metrics-scraper:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-377045 image ls --format short --alsologtostderr:
I0701 12:14:51.083675  646453 out.go:291] Setting OutFile to fd 1 ...
I0701 12:14:51.083787  646453 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 12:14:51.083796  646453 out.go:304] Setting ErrFile to fd 2...
I0701 12:14:51.083802  646453 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 12:14:51.084005  646453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
I0701 12:14:51.084636  646453 config.go:182] Loaded profile config "functional-377045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 12:14:51.084776  646453 config.go:182] Loaded profile config "functional-377045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 12:14:51.085155  646453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 12:14:51.085198  646453 main.go:141] libmachine: Launching plugin server for driver kvm2
I0701 12:14:51.102487  646453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34069
I0701 12:14:51.103136  646453 main.go:141] libmachine: () Calling .GetVersion
I0701 12:14:51.103836  646453 main.go:141] libmachine: Using API Version  1
I0701 12:14:51.103862  646453 main.go:141] libmachine: () Calling .SetConfigRaw
I0701 12:14:51.104248  646453 main.go:141] libmachine: () Calling .GetMachineName
I0701 12:14:51.104509  646453 main.go:141] libmachine: (functional-377045) Calling .GetState
I0701 12:14:51.106502  646453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 12:14:51.106549  646453 main.go:141] libmachine: Launching plugin server for driver kvm2
I0701 12:14:51.123029  646453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45801
I0701 12:14:51.123459  646453 main.go:141] libmachine: () Calling .GetVersion
I0701 12:14:51.123929  646453 main.go:141] libmachine: Using API Version  1
I0701 12:14:51.123948  646453 main.go:141] libmachine: () Calling .SetConfigRaw
I0701 12:14:51.124379  646453 main.go:141] libmachine: () Calling .GetMachineName
I0701 12:14:51.124619  646453 main.go:141] libmachine: (functional-377045) Calling .DriverName
I0701 12:14:51.124892  646453 ssh_runner.go:195] Run: systemctl --version
I0701 12:14:51.124915  646453 main.go:141] libmachine: (functional-377045) Calling .GetSSHHostname
I0701 12:14:51.127653  646453 main.go:141] libmachine: (functional-377045) DBG | domain functional-377045 has defined MAC address 52:54:00:7a:6d:18 in network mk-functional-377045
I0701 12:14:51.127980  646453 main.go:141] libmachine: (functional-377045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:6d:18", ip: ""} in network mk-functional-377045: {Iface:virbr1 ExpiryTime:2024-07-01 13:11:34 +0000 UTC Type:0 Mac:52:54:00:7a:6d:18 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:functional-377045 Clientid:01:52:54:00:7a:6d:18}
I0701 12:14:51.128069  646453 main.go:141] libmachine: (functional-377045) DBG | domain functional-377045 has defined IP address 192.168.39.77 and MAC address 52:54:00:7a:6d:18 in network mk-functional-377045
I0701 12:14:51.128274  646453 main.go:141] libmachine: (functional-377045) Calling .GetSSHPort
I0701 12:14:51.128440  646453 main.go:141] libmachine: (functional-377045) Calling .GetSSHKeyPath
I0701 12:14:51.128543  646453 main.go:141] libmachine: (functional-377045) Calling .GetSSHUsername
I0701 12:14:51.128634  646453 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/functional-377045/id_rsa Username:docker}
I0701 12:14:51.212759  646453 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0701 12:14:51.236119  646453 main.go:141] libmachine: Making call to close driver server
I0701 12:14:51.236135  646453 main.go:141] libmachine: (functional-377045) Calling .Close
I0701 12:14:51.236475  646453 main.go:141] libmachine: Successfully made call to close driver server
I0701 12:14:51.236489  646453 main.go:141] libmachine: (functional-377045) DBG | Closing plugin on server side
I0701 12:14:51.236505  646453 main.go:141] libmachine: Making call to close connection to plugin binary
I0701 12:14:51.236534  646453 main.go:141] libmachine: Making call to close driver server
I0701 12:14:51.236542  646453 main.go:141] libmachine: (functional-377045) Calling .Close
I0701 12:14:51.236790  646453 main.go:141] libmachine: Successfully made call to close driver server
I0701 12:14:51.236815  646453 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)
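
The same listing is available in several formats; per the stderr above, each variant shells into the VM, runs docker images --no-trunc --format "{{json .}}", and renders the result. A sketch:

	out/minikube-linux-amd64 -p functional-377045 image ls --format short   # one repo:tag per line, as above
	out/minikube-linux-amd64 -p functional-377045 image ls --format table   # bordered table with IDs and sizes
	out/minikube-linux-amd64 -p functional-377045 image ls --format json    # array of {id, repoTags, size, ...} objects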

TestFunctional/parallel/ImageCommands/ImageListTable (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-377045 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-377045 | 530babb8b88b2 | 30B    |
| registry.k8s.io/kube-proxy                  | v1.30.2           | 53c535741fb44 | 84.7MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/google-containers/addon-resizer      | functional-377045 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | latest            | e0c9858e10ed8 | 188MB  |
| registry.k8s.io/kube-apiserver              | v1.30.2           | 56ce0fd9fb532 | 117MB  |
| registry.k8s.io/kube-controller-manager     | v1.30.2           | e874818b3caac | 111MB  |
| registry.k8s.io/kube-scheduler              | v1.30.2           | 7820c83aa1394 | 62MB   |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-377045 image ls --format table --alsologtostderr:
I0701 12:14:51.586164  646579 out.go:291] Setting OutFile to fd 1 ...
I0701 12:14:51.586277  646579 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 12:14:51.586286  646579 out.go:304] Setting ErrFile to fd 2...
I0701 12:14:51.586290  646579 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 12:14:51.586494  646579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
I0701 12:14:51.587103  646579 config.go:182] Loaded profile config "functional-377045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 12:14:51.587201  646579 config.go:182] Loaded profile config "functional-377045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 12:14:51.587585  646579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 12:14:51.587623  646579 main.go:141] libmachine: Launching plugin server for driver kvm2
I0701 12:14:51.602963  646579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34467
I0701 12:14:51.603557  646579 main.go:141] libmachine: () Calling .GetVersion
I0701 12:14:51.604231  646579 main.go:141] libmachine: Using API Version  1
I0701 12:14:51.604288  646579 main.go:141] libmachine: () Calling .SetConfigRaw
I0701 12:14:51.604707  646579 main.go:141] libmachine: () Calling .GetMachineName
I0701 12:14:51.604935  646579 main.go:141] libmachine: (functional-377045) Calling .GetState
I0701 12:14:51.607223  646579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 12:14:51.607283  646579 main.go:141] libmachine: Launching plugin server for driver kvm2
I0701 12:14:51.622462  646579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40067
I0701 12:14:51.622925  646579 main.go:141] libmachine: () Calling .GetVersion
I0701 12:14:51.623452  646579 main.go:141] libmachine: Using API Version  1
I0701 12:14:51.623478  646579 main.go:141] libmachine: () Calling .SetConfigRaw
I0701 12:14:51.623832  646579 main.go:141] libmachine: () Calling .GetMachineName
I0701 12:14:51.624015  646579 main.go:141] libmachine: (functional-377045) Calling .DriverName
I0701 12:14:51.624234  646579 ssh_runner.go:195] Run: systemctl --version
I0701 12:14:51.624259  646579 main.go:141] libmachine: (functional-377045) Calling .GetSSHHostname
I0701 12:14:51.627619  646579 main.go:141] libmachine: (functional-377045) DBG | domain functional-377045 has defined MAC address 52:54:00:7a:6d:18 in network mk-functional-377045
I0701 12:14:51.628060  646579 main.go:141] libmachine: (functional-377045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:6d:18", ip: ""} in network mk-functional-377045: {Iface:virbr1 ExpiryTime:2024-07-01 13:11:34 +0000 UTC Type:0 Mac:52:54:00:7a:6d:18 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:functional-377045 Clientid:01:52:54:00:7a:6d:18}
I0701 12:14:51.628089  646579 main.go:141] libmachine: (functional-377045) DBG | domain functional-377045 has defined IP address 192.168.39.77 and MAC address 52:54:00:7a:6d:18 in network mk-functional-377045
I0701 12:14:51.628249  646579 main.go:141] libmachine: (functional-377045) Calling .GetSSHPort
I0701 12:14:51.628444  646579 main.go:141] libmachine: (functional-377045) Calling .GetSSHKeyPath
I0701 12:14:51.628637  646579 main.go:141] libmachine: (functional-377045) Calling .GetSSHUsername
I0701 12:14:51.628785  646579 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/functional-377045/id_rsa Username:docker}
I0701 12:14:51.746145  646579 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0701 12:14:51.781862  646579 main.go:141] libmachine: Making call to close driver server
I0701 12:14:51.781894  646579 main.go:141] libmachine: (functional-377045) Calling .Close
I0701 12:14:51.782207  646579 main.go:141] libmachine: Successfully made call to close driver server
I0701 12:14:51.782241  646579 main.go:141] libmachine: Making call to close connection to plugin binary
I0701 12:14:51.782249  646579 main.go:141] libmachine: Making call to close driver server
I0701 12:14:51.782256  646579 main.go:141] libmachine: (functional-377045) Calling .Close
I0701 12:14:51.782257  646579 main.go:141] libmachine: (functional-377045) DBG | Closing plugin on server side
I0701 12:14:51.782530  646579 main.go:141] libmachine: Successfully made call to close driver server
I0701 12:14:51.782545  646579 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.45s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-377045 image ls --format json --alsologtostderr:
[{"id":"530babb8b88b2ebe4c43c65c65c257f3260ae247d191c4ed41ca217efa66372f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-377045"],"size":"30"},{"id":"e0c9858e10ed8be697dc2809db78c57357ffc82de88c69a3dee5d148354679ef","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"111000000"},{"id":"53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.2"],"size":"84700000"},{"id":"5107333e08a87b836d48f
f7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-377045"],"size":"32900000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","repoDig
ests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"117000000"},{"id":"7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"62000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-377045 image ls --format json --alsologtostderr:
I0701 12:14:51.372016  646525 out.go:291] Setting OutFile to fd 1 ...
I0701 12:14:51.372585  646525 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 12:14:51.372638  646525 out.go:304] Setting ErrFile to fd 2...
I0701 12:14:51.372657  646525 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 12:14:51.373070  646525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
I0701 12:14:51.374546  646525 config.go:182] Loaded profile config "functional-377045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 12:14:51.374680  646525 config.go:182] Loaded profile config "functional-377045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 12:14:51.375132  646525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 12:14:51.375184  646525 main.go:141] libmachine: Launching plugin server for driver kvm2
I0701 12:14:51.391336  646525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33375
I0701 12:14:51.391755  646525 main.go:141] libmachine: () Calling .GetVersion
I0701 12:14:51.392398  646525 main.go:141] libmachine: Using API Version  1
I0701 12:14:51.392431  646525 main.go:141] libmachine: () Calling .SetConfigRaw
I0701 12:14:51.392754  646525 main.go:141] libmachine: () Calling .GetMachineName
I0701 12:14:51.392968  646525 main.go:141] libmachine: (functional-377045) Calling .GetState
I0701 12:14:51.395107  646525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 12:14:51.395151  646525 main.go:141] libmachine: Launching plugin server for driver kvm2
I0701 12:14:51.411260  646525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33761
I0701 12:14:51.411751  646525 main.go:141] libmachine: () Calling .GetVersion
I0701 12:14:51.412254  646525 main.go:141] libmachine: Using API Version  1
I0701 12:14:51.412276  646525 main.go:141] libmachine: () Calling .SetConfigRaw
I0701 12:14:51.412560  646525 main.go:141] libmachine: () Calling .GetMachineName
I0701 12:14:51.412662  646525 main.go:141] libmachine: (functional-377045) Calling .DriverName
I0701 12:14:51.412807  646525 ssh_runner.go:195] Run: systemctl --version
I0701 12:14:51.412829  646525 main.go:141] libmachine: (functional-377045) Calling .GetSSHHostname
I0701 12:14:51.418269  646525 main.go:141] libmachine: (functional-377045) DBG | domain functional-377045 has defined MAC address 52:54:00:7a:6d:18 in network mk-functional-377045
I0701 12:14:51.418701  646525 main.go:141] libmachine: (functional-377045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:6d:18", ip: ""} in network mk-functional-377045: {Iface:virbr1 ExpiryTime:2024-07-01 13:11:34 +0000 UTC Type:0 Mac:52:54:00:7a:6d:18 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:functional-377045 Clientid:01:52:54:00:7a:6d:18}
I0701 12:14:51.418732  646525 main.go:141] libmachine: (functional-377045) DBG | domain functional-377045 has defined IP address 192.168.39.77 and MAC address 52:54:00:7a:6d:18 in network mk-functional-377045
I0701 12:14:51.418901  646525 main.go:141] libmachine: (functional-377045) Calling .GetSSHPort
I0701 12:14:51.419082  646525 main.go:141] libmachine: (functional-377045) Calling .GetSSHKeyPath
I0701 12:14:51.419241  646525 main.go:141] libmachine: (functional-377045) Calling .GetSSHUsername
I0701 12:14:51.419420  646525 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/functional-377045/id_rsa Username:docker}
I0701 12:14:51.500835  646525 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0701 12:14:51.533793  646525 main.go:141] libmachine: Making call to close driver server
I0701 12:14:51.533822  646525 main.go:141] libmachine: (functional-377045) Calling .Close
I0701 12:14:51.534061  646525 main.go:141] libmachine: Successfully made call to close driver server
I0701 12:14:51.534083  646525 main.go:141] libmachine: Making call to close connection to plugin binary
I0701 12:14:51.534092  646525 main.go:141] libmachine: Making call to close driver server
I0701 12:14:51.534100  646525 main.go:141] libmachine: (functional-377045) Calling .Close
I0701 12:14:51.534282  646525 main.go:141] libmachine: Successfully made call to close driver server
I0701 12:14:51.534287  646525 main.go:141] libmachine: (functional-377045) DBG | Closing plugin on server side
I0701 12:14:51.534297  646525 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
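
Note: the JSON from "image ls --format json" is a plain array of image objects, so it can be post-processed with standard tooling. A minimal sketch, assuming jq is installed on the host:

  out/minikube-linux-amd64 -p functional-377045 image ls --format json \
    | jq -r '.[] | "\(.repoTags[0])\t\(.size)"'    # one "tag<TAB>size" line per image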

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-377045 image ls --format yaml --alsologtostderr:
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-377045
size: "32900000"
- id: 53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "84700000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 530babb8b88b2ebe4c43c65c65c257f3260ae247d191c4ed41ca217efa66372f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-377045
size: "30"
- id: e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "111000000"
- id: 7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "62000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: e0c9858e10ed8be697dc2809db78c57357ffc82de88c69a3dee5d148354679ef
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "117000000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-377045 image ls --format yaml --alsologtostderr:
I0701 12:14:51.150044  646470 out.go:291] Setting OutFile to fd 1 ...
I0701 12:14:51.150369  646470 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 12:14:51.150381  646470 out.go:304] Setting ErrFile to fd 2...
I0701 12:14:51.150388  646470 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 12:14:51.150660  646470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
I0701 12:14:51.152059  646470 config.go:182] Loaded profile config "functional-377045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 12:14:51.152278  646470 config.go:182] Loaded profile config "functional-377045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 12:14:51.153093  646470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 12:14:51.153155  646470 main.go:141] libmachine: Launching plugin server for driver kvm2
I0701 12:14:51.168708  646470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46405
I0701 12:14:51.169241  646470 main.go:141] libmachine: () Calling .GetVersion
I0701 12:14:51.169813  646470 main.go:141] libmachine: Using API Version  1
I0701 12:14:51.169838  646470 main.go:141] libmachine: () Calling .SetConfigRaw
I0701 12:14:51.170240  646470 main.go:141] libmachine: () Calling .GetMachineName
I0701 12:14:51.170601  646470 main.go:141] libmachine: (functional-377045) Calling .GetState
I0701 12:14:51.172516  646470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 12:14:51.172574  646470 main.go:141] libmachine: Launching plugin server for driver kvm2
I0701 12:14:51.187486  646470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46557
I0701 12:14:51.187995  646470 main.go:141] libmachine: () Calling .GetVersion
I0701 12:14:51.188573  646470 main.go:141] libmachine: Using API Version  1
I0701 12:14:51.188597  646470 main.go:141] libmachine: () Calling .SetConfigRaw
I0701 12:14:51.188871  646470 main.go:141] libmachine: () Calling .GetMachineName
I0701 12:14:51.189037  646470 main.go:141] libmachine: (functional-377045) Calling .DriverName
I0701 12:14:51.189225  646470 ssh_runner.go:195] Run: systemctl --version
I0701 12:14:51.189257  646470 main.go:141] libmachine: (functional-377045) Calling .GetSSHHostname
I0701 12:14:51.192066  646470 main.go:141] libmachine: (functional-377045) DBG | domain functional-377045 has defined MAC address 52:54:00:7a:6d:18 in network mk-functional-377045
I0701 12:14:51.192454  646470 main.go:141] libmachine: (functional-377045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:6d:18", ip: ""} in network mk-functional-377045: {Iface:virbr1 ExpiryTime:2024-07-01 13:11:34 +0000 UTC Type:0 Mac:52:54:00:7a:6d:18 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:functional-377045 Clientid:01:52:54:00:7a:6d:18}
I0701 12:14:51.192484  646470 main.go:141] libmachine: (functional-377045) DBG | domain functional-377045 has defined IP address 192.168.39.77 and MAC address 52:54:00:7a:6d:18 in network mk-functional-377045
I0701 12:14:51.192625  646470 main.go:141] libmachine: (functional-377045) Calling .GetSSHPort
I0701 12:14:51.192801  646470 main.go:141] libmachine: (functional-377045) Calling .GetSSHKeyPath
I0701 12:14:51.192954  646470 main.go:141] libmachine: (functional-377045) Calling .GetSSHUsername
I0701 12:14:51.193074  646470 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/functional-377045/id_rsa Username:docker}
I0701 12:14:51.289047  646470 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0701 12:14:51.318839  646470 main.go:141] libmachine: Making call to close driver server
I0701 12:14:51.318851  646470 main.go:141] libmachine: (functional-377045) Calling .Close
I0701 12:14:51.319157  646470 main.go:141] libmachine: (functional-377045) DBG | Closing plugin on server side
I0701 12:14:51.319176  646470 main.go:141] libmachine: Successfully made call to close driver server
I0701 12:14:51.319199  646470 main.go:141] libmachine: Making call to close connection to plugin binary
I0701 12:14:51.319215  646470 main.go:141] libmachine: Making call to close driver server
I0701 12:14:51.319228  646470 main.go:141] libmachine: (functional-377045) Calling .Close
I0701 12:14:51.319463  646470 main.go:141] libmachine: Successfully made call to close driver server
I0701 12:14:51.319490  646470 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377045 ssh pgrep buildkitd: exit status 1 (202.633565ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 image build -t localhost/my-image:functional-377045 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-377045 image build -t localhost/my-image:functional-377045 testdata/build --alsologtostderr: (2.630608103s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-377045 image build -t localhost/my-image:functional-377045 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 6eee316a219b
---> Removed intermediate container 6eee316a219b
---> e6432dba4957
Step 3/3 : ADD content.txt /
---> fac30754f5cc
Successfully built fac30754f5cc
Successfully tagged localhost/my-image:functional-377045
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-377045 image build -t localhost/my-image:functional-377045 testdata/build --alsologtostderr:
I0701 12:14:51.490250  646556 out.go:291] Setting OutFile to fd 1 ...
I0701 12:14:51.490393  646556 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 12:14:51.490402  646556 out.go:304] Setting ErrFile to fd 2...
I0701 12:14:51.490407  646556 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 12:14:51.490618  646556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
I0701 12:14:51.491167  646556 config.go:182] Loaded profile config "functional-377045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 12:14:51.491942  646556 config.go:182] Loaded profile config "functional-377045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 12:14:51.492443  646556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 12:14:51.492492  646556 main.go:141] libmachine: Launching plugin server for driver kvm2
I0701 12:14:51.509065  646556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33629
I0701 12:14:51.509592  646556 main.go:141] libmachine: () Calling .GetVersion
I0701 12:14:51.510190  646556 main.go:141] libmachine: Using API Version  1
I0701 12:14:51.510213  646556 main.go:141] libmachine: () Calling .SetConfigRaw
I0701 12:14:51.510643  646556 main.go:141] libmachine: () Calling .GetMachineName
I0701 12:14:51.510844  646556 main.go:141] libmachine: (functional-377045) Calling .GetState
I0701 12:14:51.512956  646556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 12:14:51.513005  646556 main.go:141] libmachine: Launching plugin server for driver kvm2
I0701 12:14:51.533035  646556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46309
I0701 12:14:51.533515  646556 main.go:141] libmachine: () Calling .GetVersion
I0701 12:14:51.534135  646556 main.go:141] libmachine: Using API Version  1
I0701 12:14:51.534180  646556 main.go:141] libmachine: () Calling .SetConfigRaw
I0701 12:14:51.534764  646556 main.go:141] libmachine: () Calling .GetMachineName
I0701 12:14:51.534970  646556 main.go:141] libmachine: (functional-377045) Calling .DriverName
I0701 12:14:51.535182  646556 ssh_runner.go:195] Run: systemctl --version
I0701 12:14:51.535221  646556 main.go:141] libmachine: (functional-377045) Calling .GetSSHHostname
I0701 12:14:51.538373  646556 main.go:141] libmachine: (functional-377045) DBG | domain functional-377045 has defined MAC address 52:54:00:7a:6d:18 in network mk-functional-377045
I0701 12:14:51.538819  646556 main.go:141] libmachine: (functional-377045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:6d:18", ip: ""} in network mk-functional-377045: {Iface:virbr1 ExpiryTime:2024-07-01 13:11:34 +0000 UTC Type:0 Mac:52:54:00:7a:6d:18 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:functional-377045 Clientid:01:52:54:00:7a:6d:18}
I0701 12:14:51.538846  646556 main.go:141] libmachine: (functional-377045) DBG | domain functional-377045 has defined IP address 192.168.39.77 and MAC address 52:54:00:7a:6d:18 in network mk-functional-377045
I0701 12:14:51.539004  646556 main.go:141] libmachine: (functional-377045) Calling .GetSSHPort
I0701 12:14:51.539187  646556 main.go:141] libmachine: (functional-377045) Calling .GetSSHKeyPath
I0701 12:14:51.539377  646556 main.go:141] libmachine: (functional-377045) Calling .GetSSHUsername
I0701 12:14:51.539539  646556 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/functional-377045/id_rsa Username:docker}
I0701 12:14:51.625225  646556 build_images.go:161] Building image from path: /tmp/build.1579604787.tar
I0701 12:14:51.625291  646556 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0701 12:14:51.640513  646556 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1579604787.tar
I0701 12:14:51.650636  646556 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1579604787.tar: stat -c "%s %y" /var/lib/minikube/build/build.1579604787.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1579604787.tar': No such file or directory
I0701 12:14:51.650674  646556 ssh_runner.go:362] scp /tmp/build.1579604787.tar --> /var/lib/minikube/build/build.1579604787.tar (3072 bytes)
I0701 12:14:51.690163  646556 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1579604787
I0701 12:14:51.700731  646556 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1579604787 -xf /var/lib/minikube/build/build.1579604787.tar
I0701 12:14:51.710450  646556 docker.go:360] Building image: /var/lib/minikube/build/build.1579604787
I0701 12:14:51.710512  646556 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-377045 /var/lib/minikube/build/build.1579604787
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0701 12:14:54.047493  646556 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-377045 /var/lib/minikube/build/build.1579604787: (2.336946362s)
I0701 12:14:54.047608  646556 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1579604787
I0701 12:14:54.058287  646556 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1579604787.tar
I0701 12:14:54.068571  646556 build_images.go:217] Built localhost/my-image:functional-377045 from /tmp/build.1579604787.tar
I0701 12:14:54.068611  646556 build_images.go:133] succeeded building to: functional-377045
I0701 12:14:54.068618  646556 build_images.go:134] failed building to: 
I0701 12:14:54.068655  646556 main.go:141] libmachine: Making call to close driver server
I0701 12:14:54.068672  646556 main.go:141] libmachine: (functional-377045) Calling .Close
I0701 12:14:54.069003  646556 main.go:141] libmachine: (functional-377045) DBG | Closing plugin on server side
I0701 12:14:54.069014  646556 main.go:141] libmachine: Successfully made call to close driver server
I0701 12:14:54.069042  646556 main.go:141] libmachine: Making call to close connection to plugin binary
I0701 12:14:54.069052  646556 main.go:141] libmachine: Making call to close driver server
I0701 12:14:54.069061  646556 main.go:141] libmachine: (functional-377045) Calling .Close
I0701 12:14:54.069344  646556 main.go:141] libmachine: Successfully made call to close driver server
I0701 12:14:54.069363  646556 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 image ls
2024/07/01 12:14:57 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.04s)
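
Note: judging from the three build steps in the stdout above, the testdata/build context is equivalent to a Dockerfile of "FROM gcr.io/k8s-minikube/busybox / RUN true / ADD content.txt /" plus a small content.txt payload. A sketch that reproduces the build by hand (the payload below is a placeholder; the real content.txt is not shown in the log):

  mkdir -p /tmp/build-demo
  printf 'placeholder\n' > /tmp/build-demo/content.txt       # stand-in payload; actual test file contents unknown
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' \
    > /tmp/build-demo/Dockerfile                             # Dockerfile inferred from Steps 1/3-3/3 above
  out/minikube-linux-amd64 -p functional-377045 image build \
    -t localhost/my-image:functional-377045 /tmp/build-demo

The "legacy builder is deprecated" warning comes from the Docker 27.0.1 daemon inside the VM; the build still succeeds without buildx.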

TestFunctional/parallel/ImageCommands/Setup (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.275292766s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-377045
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.30s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 image load --daemon gcr.io/google-containers/addon-resizer:functional-377045 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-377045 image load --daemon gcr.io/google-containers/addon-resizer:functional-377045 --alsologtostderr: (4.64912281s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.85s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "254.649338ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "50.686209ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "222.081772ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "48.680837ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)
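
Note: "profile list -o json" should return an object with "valid" and "invalid" profile arrays, and the --light variant is faster (about 49ms versus 222ms above) because it skips probing each cluster's status. An illustrative filter, assuming jq is available:

  out/minikube-linux-amd64 profile list -o json --light | jq -r '.valid[].Name'    # profile names only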

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 image load --daemon gcr.io/google-containers/addon-resizer:functional-377045 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-377045 image load --daemon gcr.io/google-containers/addon-resizer:functional-377045 --alsologtostderr: (2.388240331s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.62s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.453703772s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-377045
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 image load --daemon gcr.io/google-containers/addon-resizer:functional-377045 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-377045 image load --daemon gcr.io/google-containers/addon-resizer:functional-377045 --alsologtostderr: (4.027509272s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.71s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 image save gcr.io/google-containers/addon-resizer:functional-377045 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-377045 image save gcr.io/google-containers/addon-resizer:functional-377045 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.48461586s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.48s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 image rm gcr.io/google-containers/addon-resizer:functional-377045 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-377045 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.581184692s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.82s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-377045
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 image save --daemon gcr.io/google-containers/addon-resizer:functional-377045 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-377045 image save --daemon gcr.io/google-containers/addon-resizer:functional-377045 --alsologtostderr: (1.556364258s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-377045
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.59s)
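
Note: the four tests above (ImageSaveToFile, ImageRemove, ImageLoadFromFile, ImageSaveDaemon) amount to a full image round trip. The equivalent manual sequence, sketched with an illustrative tar path:

  MK="out/minikube-linux-amd64 -p functional-377045"
  IMG=gcr.io/google-containers/addon-resizer:functional-377045
  $MK image save $IMG /tmp/addon-resizer.tar    # image in VM -> tar on host
  $MK image rm $IMG                             # drop it from the VM
  $MK image load /tmp/addon-resizer.tar         # tar on host -> back into the VM
  $MK image save --daemon $IMG                  # VM -> host docker daemon
  docker image inspect $IMG                     # confirm it reached the host daemon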

TestFunctional/parallel/ServiceCmd/DeployApp (15.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-377045 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-377045 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-djwmt" [11010840-3c87-4cc7-bb90-29242f9d6efd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-djwmt" [11010840-3c87-4cc7-bb90-29242f9d6efd] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 15.005624416s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (15.21s)
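
Note: the deployment is exposed as a NodePort service; the port Kubernetes assigned (31692, per the HTTPS/URL tests further down) can be read back directly. An illustrative query:

  kubectl --context functional-377045 get service hello-node -o jsonpath='{.spec.ports[0].nodePort}'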

TestFunctional/parallel/MountCmd/any-port (6.52s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-377045 /tmp/TestFunctionalparallelMountCmdany-port3295130191/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1719836080254273221" to /tmp/TestFunctionalparallelMountCmdany-port3295130191/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1719836080254273221" to /tmp/TestFunctionalparallelMountCmdany-port3295130191/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1719836080254273221" to /tmp/TestFunctionalparallelMountCmdany-port3295130191/001/test-1719836080254273221
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377045 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (214.768106ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul  1 12:14 created-by-test
-rw-r--r-- 1 docker docker 24 Jul  1 12:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul  1 12:14 test-1719836080254273221
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh cat /mount-9p/test-1719836080254273221
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-377045 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [db7e47e3-324b-4bf5-8c2d-2451991a3c22] Pending
helpers_test.go:344: "busybox-mount" [db7e47e3-324b-4bf5-8c2d-2451991a3c22] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [db7e47e3-324b-4bf5-8c2d-2451991a3c22] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E0701 12:14:44.786830  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [db7e47e3-324b-4bf5-8c2d-2451991a3c22] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004472798s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-377045 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-377045 /tmp/TestFunctionalparallelMountCmdany-port3295130191/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.52s)
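
Note: the mount under test is a 9p filesystem served by the background "minikube mount" process, so files written on the host appear under /mount-9p in the guest; the first findmnt probe fails simply because the mount is not up yet, and the retry succeeds. The same flow outside the harness, with an illustrative host path:

  out/minikube-linux-amd64 mount -p functional-377045 /tmp/hostdir:/mount-9p &          # serve /tmp/hostdir into the guest
  out/minikube-linux-amd64 -p functional-377045 ssh "findmnt -T /mount-9p | grep 9p"    # confirm the 9p mount
  out/minikube-linux-amd64 -p functional-377045 ssh -- ls -la /mount-9p                 # guest view of the host directory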

TestFunctional/parallel/ServiceCmd/List (1.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 service list
functional_test.go:1455: (dbg) Done: out/minikube-linux-amd64 -p functional-377045 service list: (1.252632943s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.25s)

TestFunctional/parallel/MountCmd/specific-port (1.91s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-377045 /tmp/TestFunctionalparallelMountCmdspecific-port2208257514/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377045 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (199.476006ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-377045 /tmp/TestFunctionalparallelMountCmdspecific-port2208257514/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377045 ssh "sudo umount -f /mount-9p": exit status 1 (221.062741ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-377045 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-377045 /tmp/TestFunctionalparallelMountCmdspecific-port2208257514/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.91s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-linux-amd64 -p functional-377045 service list -o json: (1.26747743s)
functional_test.go:1490: Took "1.267619077s" to run "out/minikube-linux-amd64 -p functional-377045 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.27s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.23s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-377045 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1917893242/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-377045 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1917893242/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-377045 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1917893242/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377045 ssh "findmnt -T" /mount1: exit status 1 (213.348263ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-377045 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-377045 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1917893242/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-377045 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1917893242/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-377045 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1917893242/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.23s)
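
Note: cleanup here relies on "mount --kill=true", which terminates every background mount process for the profile in one step instead of stopping the three daemons individually; the "unable to find parent, assuming dead" lines confirm they were already gone. Illustrative usage:

  out/minikube-linux-amd64 mount -p functional-377045 --kill=true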

TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.77:31692
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

TestFunctional/parallel/ServiceCmd/Format (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

TestFunctional/parallel/ServiceCmd/URL (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.77:31692
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)
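
Note: Format, URL, and HTTPS all resolve to the same NodePort (31692) on the VM IP 192.168.39.77. Once the URL is known, the echoserver behind hello-node can be exercised directly; an illustrative check:

  URL=$(out/minikube-linux-amd64 -p functional-377045 service hello-node --url)
  curl -s "$URL"    # echoserver responds with details of the incoming request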

TestFunctional/parallel/DockerEnv/bash (0.96s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-377045 docker-env) && out/minikube-linux-amd64 status -p functional-377045"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-377045 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.96s)
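
Note: docker-env works by printing DOCKER_HOST/DOCKER_TLS_VERIFY/DOCKER_CERT_PATH exports that point the host docker CLI at the daemon inside the VM, which is why the test evaluates it in a bash subshell. The same pattern interactively:

  eval $(out/minikube-linux-amd64 -p functional-377045 docker-env)             # CLI now targets the VM's daemon
  docker images                                                                # lists the VM's images, not the host's
  eval $(out/minikube-linux-amd64 -p functional-377045 docker-env --unset)     # point the CLI back at the host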

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-377045 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-377045
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-377045
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-377045
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (209.74s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-264306 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-264306 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m28.449987002s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-264306 cache add gcr.io/k8s-minikube/gvisor-addon:2
E0701 12:57:14.857332  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/skaffold-459115/client.crt: no such file or directory
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-264306 cache add gcr.io/k8s-minikube/gvisor-addon:2: (22.637882799s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-264306 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-264306 addons enable gvisor: (3.624252336s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [4849b253-37e2-4e91-94ce-587cbcaf2cbd] Running
E0701 12:57:35.338503  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/skaffold-459115/client.crt: no such file or directory
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.006874918s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-264306 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [04576209-5aa6-4f8c-a302-8292b235bc88] Pending
helpers_test.go:344: "nginx-gvisor" [04576209-5aa6-4f8c-a302-8292b235bc88] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [04576209-5aa6-4f8c-a302-8292b235bc88] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 14.004051846s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-264306
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-264306: (7.308083334s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-264306 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-264306 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (55.60465614s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [4849b253-37e2-4e91-94ce-587cbcaf2cbd] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.003989891s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [04576209-5aa6-4f8c-a302-8292b235bc88] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.004836354s
helpers_test.go:175: Cleaning up "gvisor-264306" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-264306
--- PASS: TestGvisorAddon (209.74s)
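
Note: the flow above starts a containerd cluster, enables the gvisor addon (which runs the "gvisor" pod in kube-system and should register a gVisor RuntimeClass), deploys a pod pinned to that runtime, and then stops and restarts the whole cluster to check the runtime survives. Condensed:

  out/minikube-linux-amd64 start -p gvisor-264306 --container-runtime=containerd --driver=kvm2
  out/minikube-linux-amd64 -p gvisor-264306 addons enable gvisor
  kubectl --context gvisor-264306 replace --force -f testdata/nginx-gvisor.yaml    # pod that requests the gVisor runtime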

TestMultiControlPlane/serial/StartCluster (206.65s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-735960 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0701 12:16:06.707946  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
E0701 12:18:22.863912  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-735960 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m26.00159477s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (206.65s)

TestMultiControlPlane/serial/DeployApp (6.13s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735960 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735960 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-735960 -- rollout status deployment/busybox: (3.75536681s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735960 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735960 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735960 -- exec busybox-fc5497c4f-cpsct -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735960 -- exec busybox-fc5497c4f-pjfcw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735960 -- exec busybox-fc5497c4f-twnb4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735960 -- exec busybox-fc5497c4f-cpsct -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735960 -- exec busybox-fc5497c4f-pjfcw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735960 -- exec busybox-fc5497c4f-twnb4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735960 -- exec busybox-fc5497c4f-cpsct -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735960 -- exec busybox-fc5497c4f-pjfcw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735960 -- exec busybox-fc5497c4f-twnb4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.13s)
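
A note on the pattern above: the test lists pod names with -o jsonpath='{.items[*].metadata.name}' and then execs nslookup in each pod. Below is a minimal stand-alone sketch of the same fan-out, using plain kubectl with --context where the harness goes through the minikube kubectl wrapper; the profile name is from the log, the rest is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The jsonpath expression prints the names space-separated on one line,
	// so strings.Fields recovers the individual pod names.
	out, err := exec.Command("kubectl", "--context", "ha-735960",
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}
	for _, pod := range strings.Fields(string(out)) {
		// Same per-pod DNS check as in the log above.
		err := exec.Command("kubectl", "--context", "ha-735960",
			"exec", pod, "--", "nslookup", "kubernetes.default").Run()
		fmt.Printf("%s: nslookup kubernetes.default err=%v\n", pod, err)
	}
}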

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.32s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735960 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735960 -- exec busybox-fc5497c4f-cpsct -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735960 -- exec busybox-fc5497c4f-cpsct -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735960 -- exec busybox-fc5497c4f-pjfcw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735960 -- exec busybox-fc5497c4f-pjfcw -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735960 -- exec busybox-fc5497c4f-twnb4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735960 -- exec busybox-fc5497c4f-twnb4 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.32s)
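
The shell pipeline in this test is worth unpacking: nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 takes the fifth line of busybox nslookup output and its third space-separated field, which is the resolved address of the host gateway; the test then pings that address from inside the pod. A hedged Go equivalent of the parsing step (the pod name is copied from the log; the exact nslookup layout depends on the busybox build):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostIP extracts the resolved address the same way the awk/cut pipeline
// does: line 5 of the nslookup output (awk NR==5), third whitespace-separated
// field (cut -d' ' -f3).
func hostIP(pod string) (string, error) {
	out, err := exec.Command("kubectl", "--context", "ha-735960", "exec", pod,
		"--", "nslookup", "host.minikube.internal").Output()
	if err != nil {
		return "", err
	}
	lines := strings.Split(string(out), "\n")
	if len(lines) < 5 {
		return "", fmt.Errorf("short nslookup output: %q", out)
	}
	fields := strings.Fields(lines[4]) // NR==5 is index 4
	if len(fields) < 3 {
		return "", fmt.Errorf("unexpected nslookup line: %q", lines[4])
	}
	return fields[2], nil
}

func main() {
	ip, err := hostIP("busybox-fc5497c4f-cpsct")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("host.minikube.internal =", ip) // 192.168.39.1 in this run
}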

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (49.75s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-735960 -v=7 --alsologtostderr
E0701 12:18:50.548312  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
E0701 12:19:11.881472  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
E0701 12:19:11.886764  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
E0701 12:19:11.897102  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
E0701 12:19:11.917438  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
E0701 12:19:11.957785  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
E0701 12:19:12.038156  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
E0701 12:19:12.198607  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
E0701 12:19:12.518920  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
E0701 12:19:13.159120  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
E0701 12:19:14.439319  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
E0701 12:19:16.999817  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
E0701 12:19:22.120515  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-735960 -v=7 --alsologtostderr: (48.933839854s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (49.75s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-735960 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

TestMultiControlPlane/serial/CopyFile (13.03s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 cp testdata/cp-test.txt ha-735960:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 cp ha-735960:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2826819896/001/cp-test_ha-735960.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 cp ha-735960:/home/docker/cp-test.txt ha-735960-m02:/home/docker/cp-test_ha-735960_ha-735960-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m02 "sudo cat /home/docker/cp-test_ha-735960_ha-735960-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 cp ha-735960:/home/docker/cp-test.txt ha-735960-m03:/home/docker/cp-test_ha-735960_ha-735960-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960 "sudo cat /home/docker/cp-test.txt"
E0701 12:19:32.361282  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m03 "sudo cat /home/docker/cp-test_ha-735960_ha-735960-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 cp ha-735960:/home/docker/cp-test.txt ha-735960-m04:/home/docker/cp-test_ha-735960_ha-735960-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m04 "sudo cat /home/docker/cp-test_ha-735960_ha-735960-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 cp testdata/cp-test.txt ha-735960-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 cp ha-735960-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2826819896/001/cp-test_ha-735960-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 cp ha-735960-m02:/home/docker/cp-test.txt ha-735960:/home/docker/cp-test_ha-735960-m02_ha-735960.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960 "sudo cat /home/docker/cp-test_ha-735960-m02_ha-735960.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 cp ha-735960-m02:/home/docker/cp-test.txt ha-735960-m03:/home/docker/cp-test_ha-735960-m02_ha-735960-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m03 "sudo cat /home/docker/cp-test_ha-735960-m02_ha-735960-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 cp ha-735960-m02:/home/docker/cp-test.txt ha-735960-m04:/home/docker/cp-test_ha-735960-m02_ha-735960-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m04 "sudo cat /home/docker/cp-test_ha-735960-m02_ha-735960-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 cp testdata/cp-test.txt ha-735960-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 cp ha-735960-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2826819896/001/cp-test_ha-735960-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 cp ha-735960-m03:/home/docker/cp-test.txt ha-735960:/home/docker/cp-test_ha-735960-m03_ha-735960.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960 "sudo cat /home/docker/cp-test_ha-735960-m03_ha-735960.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 cp ha-735960-m03:/home/docker/cp-test.txt ha-735960-m02:/home/docker/cp-test_ha-735960-m03_ha-735960-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m02 "sudo cat /home/docker/cp-test_ha-735960-m03_ha-735960-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 cp ha-735960-m03:/home/docker/cp-test.txt ha-735960-m04:/home/docker/cp-test_ha-735960-m03_ha-735960-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m04 "sudo cat /home/docker/cp-test_ha-735960-m03_ha-735960-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 cp testdata/cp-test.txt ha-735960-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2826819896/001/cp-test_ha-735960-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt ha-735960:/home/docker/cp-test_ha-735960-m04_ha-735960.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960 "sudo cat /home/docker/cp-test_ha-735960-m04_ha-735960.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt ha-735960-m02:/home/docker/cp-test_ha-735960-m04_ha-735960-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m02 "sudo cat /home/docker/cp-test_ha-735960-m04_ha-735960-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt ha-735960-m03:/home/docker/cp-test_ha-735960-m04_ha-735960-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 ssh -n ha-735960-m03 "sudo cat /home/docker/cp-test_ha-735960-m04_ha-735960-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.03s)
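
The CopyFile matrix above repeats one verification pattern per node pair: push a file with minikube cp, then read it back over minikube ssh and compare with the local original. A compact sketch of a single round-trip, assuming a minikube binary on PATH (the harness invokes its own out/minikube-linux-amd64 build); profile and node names are from the log.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const profile, node = "ha-735960", "ha-735960-m02"
	const remote = "/home/docker/cp-test.txt"

	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	// Push the file to the target node...
	if err := exec.Command("minikube", "-p", profile, "cp",
		"testdata/cp-test.txt", node+":"+remote).Run(); err != nil {
		panic(err)
	}
	// ...and read it back through ssh, as the helpers above do.
	back, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
		"sudo cat "+remote).Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("round-trip intact:",
		bytes.Equal(bytes.TrimSpace(back), bytes.TrimSpace(local)))
}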

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.16s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 node stop m02 -v=7 --alsologtostderr
E0701 12:19:52.842251  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-735960 node stop m02 -v=7 --alsologtostderr: (12.528362465s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr: exit status 7 (632.694033ms)

-- stdout --
	ha-735960
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-735960-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-735960-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-735960-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0701 12:19:55.179008  651405 out.go:291] Setting OutFile to fd 1 ...
	I0701 12:19:55.179129  651405 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:19:55.179143  651405 out.go:304] Setting ErrFile to fd 2...
	I0701 12:19:55.179148  651405 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:19:55.179732  651405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
	I0701 12:19:55.180087  651405 out.go:298] Setting JSON to false
	I0701 12:19:55.180139  651405 mustload.go:65] Loading cluster: ha-735960
	I0701 12:19:55.180488  651405 notify.go:220] Checking for updates...
	I0701 12:19:55.181001  651405 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:19:55.181026  651405 status.go:255] checking status of ha-735960 ...
	I0701 12:19:55.181450  651405 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:19:55.181507  651405 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:19:55.197049  651405 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38099
	I0701 12:19:55.197603  651405 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:19:55.198216  651405 main.go:141] libmachine: Using API Version  1
	I0701 12:19:55.198239  651405 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:19:55.198634  651405 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:19:55.198880  651405 main.go:141] libmachine: (ha-735960) Calling .GetState
	I0701 12:19:55.200689  651405 status.go:330] ha-735960 host status = "Running" (err=<nil>)
	I0701 12:19:55.200709  651405 host.go:66] Checking if "ha-735960" exists ...
	I0701 12:19:55.200973  651405 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:19:55.201012  651405 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:19:55.215726  651405 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44005
	I0701 12:19:55.216140  651405 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:19:55.216607  651405 main.go:141] libmachine: Using API Version  1
	I0701 12:19:55.216629  651405 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:19:55.216978  651405 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:19:55.217202  651405 main.go:141] libmachine: (ha-735960) Calling .GetIP
	I0701 12:19:55.220270  651405 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:19:55.220765  651405 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:19:55.220800  651405 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:19:55.220878  651405 host.go:66] Checking if "ha-735960" exists ...
	I0701 12:19:55.221178  651405 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:19:55.221222  651405 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:19:55.236357  651405 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34575
	I0701 12:19:55.236826  651405 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:19:55.237328  651405 main.go:141] libmachine: Using API Version  1
	I0701 12:19:55.237349  651405 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:19:55.237652  651405 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:19:55.237842  651405 main.go:141] libmachine: (ha-735960) Calling .DriverName
	I0701 12:19:55.238035  651405 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 12:19:55.238056  651405 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
	I0701 12:19:55.240779  651405 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:19:55.241134  651405 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
	I0701 12:19:55.241168  651405 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
	I0701 12:19:55.241316  651405 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
	I0701 12:19:55.241508  651405 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
	I0701 12:19:55.241666  651405 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
	I0701 12:19:55.241776  651405 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
	I0701 12:19:55.340439  651405 ssh_runner.go:195] Run: systemctl --version
	I0701 12:19:55.349098  651405 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:19:55.365122  651405 kubeconfig.go:125] found "ha-735960" server: "https://192.168.39.254:8443"
	I0701 12:19:55.365163  651405 api_server.go:166] Checking apiserver status ...
	I0701 12:19:55.365216  651405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 12:19:55.381303  651405 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup
	W0701 12:19:55.391325  651405 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0701 12:19:55.391373  651405 ssh_runner.go:195] Run: ls
	I0701 12:19:55.395327  651405 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0701 12:19:55.400889  651405 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0701 12:19:55.400916  651405 status.go:422] ha-735960 apiserver status = Running (err=<nil>)
	I0701 12:19:55.400929  651405 status.go:257] ha-735960 status: &{Name:ha-735960 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0701 12:19:55.400949  651405 status.go:255] checking status of ha-735960-m02 ...
	I0701 12:19:55.401335  651405 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:19:55.401383  651405 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:19:55.417121  651405 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45513
	I0701 12:19:55.417628  651405 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:19:55.418351  651405 main.go:141] libmachine: Using API Version  1
	I0701 12:19:55.418387  651405 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:19:55.418832  651405 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:19:55.419045  651405 main.go:141] libmachine: (ha-735960-m02) Calling .GetState
	I0701 12:19:55.421065  651405 status.go:330] ha-735960-m02 host status = "Stopped" (err=<nil>)
	I0701 12:19:55.421082  651405 status.go:343] host is not running, skipping remaining checks
	I0701 12:19:55.421090  651405 status.go:257] ha-735960-m02 status: &{Name:ha-735960-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0701 12:19:55.421112  651405 status.go:255] checking status of ha-735960-m03 ...
	I0701 12:19:55.421430  651405 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:19:55.421484  651405 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:19:55.437898  651405 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41273
	I0701 12:19:55.438416  651405 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:19:55.438926  651405 main.go:141] libmachine: Using API Version  1
	I0701 12:19:55.438951  651405 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:19:55.439295  651405 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:19:55.439456  651405 main.go:141] libmachine: (ha-735960-m03) Calling .GetState
	I0701 12:19:55.440854  651405 status.go:330] ha-735960-m03 host status = "Running" (err=<nil>)
	I0701 12:19:55.440874  651405 host.go:66] Checking if "ha-735960-m03" exists ...
	I0701 12:19:55.441191  651405 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:19:55.441236  651405 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:19:55.455852  651405 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36019
	I0701 12:19:55.456279  651405 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:19:55.456746  651405 main.go:141] libmachine: Using API Version  1
	I0701 12:19:55.456764  651405 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:19:55.457088  651405 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:19:55.457305  651405 main.go:141] libmachine: (ha-735960-m03) Calling .GetIP
	I0701 12:19:55.459960  651405 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:19:55.460380  651405 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:17:32 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:19:55.460416  651405 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:19:55.460573  651405 host.go:66] Checking if "ha-735960-m03" exists ...
	I0701 12:19:55.460904  651405 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:19:55.460959  651405 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:19:55.475626  651405 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33649
	I0701 12:19:55.476043  651405 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:19:55.476589  651405 main.go:141] libmachine: Using API Version  1
	I0701 12:19:55.476608  651405 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:19:55.476939  651405 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:19:55.477160  651405 main.go:141] libmachine: (ha-735960-m03) Calling .DriverName
	I0701 12:19:55.477348  651405 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 12:19:55.477377  651405 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHHostname
	I0701 12:19:55.480288  651405 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:19:55.480742  651405 main.go:141] libmachine: (ha-735960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:88:f2", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:17:32 +0000 UTC Type:0 Mac:52:54:00:93:88:f2 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-735960-m03 Clientid:01:52:54:00:93:88:f2}
	I0701 12:19:55.480770  651405 main.go:141] libmachine: (ha-735960-m03) DBG | domain ha-735960-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:93:88:f2 in network mk-ha-735960
	I0701 12:19:55.480884  651405 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHPort
	I0701 12:19:55.481070  651405 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHKeyPath
	I0701 12:19:55.481279  651405 main.go:141] libmachine: (ha-735960-m03) Calling .GetSSHUsername
	I0701 12:19:55.481434  651405 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m03/id_rsa Username:docker}
	I0701 12:19:55.557306  651405 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:19:55.574645  651405 kubeconfig.go:125] found "ha-735960" server: "https://192.168.39.254:8443"
	I0701 12:19:55.574680  651405 api_server.go:166] Checking apiserver status ...
	I0701 12:19:55.574725  651405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 12:19:55.589304  651405 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1976/cgroup
	W0701 12:19:55.599560  651405 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1976/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0701 12:19:55.599626  651405 ssh_runner.go:195] Run: ls
	I0701 12:19:55.603984  651405 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0701 12:19:55.608373  651405 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0701 12:19:55.608396  651405 status.go:422] ha-735960-m03 apiserver status = Running (err=<nil>)
	I0701 12:19:55.608406  651405 status.go:257] ha-735960-m03 status: &{Name:ha-735960-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0701 12:19:55.608420  651405 status.go:255] checking status of ha-735960-m04 ...
	I0701 12:19:55.608755  651405 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:19:55.608793  651405 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:19:55.626295  651405 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39789
	I0701 12:19:55.626706  651405 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:19:55.627230  651405 main.go:141] libmachine: Using API Version  1
	I0701 12:19:55.627251  651405 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:19:55.627587  651405 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:19:55.627811  651405 main.go:141] libmachine: (ha-735960-m04) Calling .GetState
	I0701 12:19:55.629419  651405 status.go:330] ha-735960-m04 host status = "Running" (err=<nil>)
	I0701 12:19:55.629459  651405 host.go:66] Checking if "ha-735960-m04" exists ...
	I0701 12:19:55.629758  651405 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:19:55.629803  651405 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:19:55.644515  651405 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43249
	I0701 12:19:55.644988  651405 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:19:55.645461  651405 main.go:141] libmachine: Using API Version  1
	I0701 12:19:55.645491  651405 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:19:55.645842  651405 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:19:55.646046  651405 main.go:141] libmachine: (ha-735960-m04) Calling .GetIP
	I0701 12:19:55.649183  651405 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:19:55.649632  651405 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:18:53 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:19:55.649662  651405 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:19:55.649815  651405 host.go:66] Checking if "ha-735960-m04" exists ...
	I0701 12:19:55.650121  651405 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:19:55.650159  651405 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:19:55.665407  651405 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45935
	I0701 12:19:55.665808  651405 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:19:55.666261  651405 main.go:141] libmachine: Using API Version  1
	I0701 12:19:55.666284  651405 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:19:55.666741  651405 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:19:55.667005  651405 main.go:141] libmachine: (ha-735960-m04) Calling .DriverName
	I0701 12:19:55.667216  651405 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 12:19:55.667242  651405 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHHostname
	I0701 12:19:55.669956  651405 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:19:55.670429  651405 main.go:141] libmachine: (ha-735960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8e:6d", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:18:53 +0000 UTC Type:0 Mac:52:54:00:2d:8e:6d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-735960-m04 Clientid:01:52:54:00:2d:8e:6d}
	I0701 12:19:55.670457  651405 main.go:141] libmachine: (ha-735960-m04) DBG | domain ha-735960-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:2d:8e:6d in network mk-ha-735960
	I0701 12:19:55.670587  651405 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHPort
	I0701 12:19:55.670788  651405 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHKeyPath
	I0701 12:19:55.670946  651405 main.go:141] libmachine: (ha-735960-m04) Calling .GetSSHUsername
	I0701 12:19:55.671069  651405 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m04/id_rsa Username:docker}
	I0701 12:19:55.753842  651405 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:19:55.767627  651405 status.go:257] ha-735960-m04 status: &{Name:ha-735960-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.16s)
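
One detail of this test that is easy to miss: with m02 stopped, minikube status deliberately returns exit status 7 rather than 0, and the harness treats that as the expected outcome. Anything scripting against status therefore has to inspect the exit code instead of treating every non-zero exit as a hard failure. A minimal sketch of that handling (profile name from the log; everything else illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "ha-735960", "status").Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes healthy")
	case errors.As(err, &exitErr):
		// Exit code 7 in the run above: one or more nodes stopped.
		fmt.Printf("cluster degraded (exit code %d)\n", exitErr.ExitCode())
	default:
		panic(err) // the binary could not be run at all
	}
	fmt.Print(string(out)) // per-node status, as in the stdout block above
}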

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.39s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.39s)

TestMultiControlPlane/serial/RestartSecondaryNode (36.38s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-735960 node start m02 -v=7 --alsologtostderr: (35.474051809s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-735960 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (36.38s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.53s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.53s)

TestImageBuild/serial/Setup (49.82s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-037909 --driver=kvm2 
E0701 12:29:45.909214  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-037909 --driver=kvm2 : (49.818160222s)
--- PASS: TestImageBuild/serial/Setup (49.82s)

TestImageBuild/serial/NormalBuild (1.49s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-037909
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-037909: (1.487318901s)
--- PASS: TestImageBuild/serial/NormalBuild (1.49s)

TestImageBuild/serial/BuildWithBuildArg (1.03s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-037909
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-037909: (1.025670847s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.03s)

TestImageBuild/serial/BuildWithDockerIgnore (0.41s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-037909
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.41s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-037909
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)

TestJSONOutput/start/Command (63.45s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-445712 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-445712 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m3.449633243s)
--- PASS: TestJSONOutput/start/Command (63.45s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.58s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-445712 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.58s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.51s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-445712 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.51s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.51s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-445712 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-445712 --output=json --user=testUser: (7.508346417s)
--- PASS: TestJSONOutput/stop/Command (7.51s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-357748 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-357748 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.709635ms)

-- stdout --
	{"specversion":"1.0","id":"39ed3568-faa7-4786-b571-aebb15b0d33f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-357748] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5ba24a35-6e9f-4ab5-b007-e368de25c18c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19166"}}
	{"specversion":"1.0","id":"08d3c705-27e3-4b09-9dda-89a25a0d56c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6b8698a7-a23e-476c-bb29-96bfd0206783","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19166-630650/kubeconfig"}}
	{"specversion":"1.0","id":"72d1380f-0f57-4c18-9044-23434ae54f58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-630650/.minikube"}}
	{"specversion":"1.0","id":"4d73368a-17f6-4227-a47f-fce872a8395e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0ad3332a-8bfe-4e88-9ae5-6301a78e8c40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c588f990-b453-4af4-bb66-e62e61728006","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-357748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-357748
--- PASS: TestErrorJSONOutput (0.20s)
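
The stdout block above shows what --output=json actually emits: one CloudEvents-style JSON object per line, with the machine-readable error (DRV_UNSUPPORTED_OS, exit code 56) carried in the data field of an io.k8s.sigs.minikube.error event. A rough sketch of consuming that stream, with the struct shape inferred from the lines above rather than from any published schema:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

// event mirrors only the fields visible in the log lines above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	cmd := exec.Command("minikube", "start", "-p", "json-output-error-357748",
		"--memory=2200", "--output=json", "--wait=true", "--driver=fail")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip anything that is not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s: %s (exit code %s)\n",
				ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
		}
	}
	_ = cmd.Wait() // non-zero exit is expected here, matching exit status 56
}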

                                                
                                    
TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (100.85s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-005755 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-005755 --driver=kvm2 : (49.641382938s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-008861 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-008861 --driver=kvm2 : (48.711508474s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-005755
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-008861
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-008861" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-008861
helpers_test.go:175: Cleaning up "first-005755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-005755
--- PASS: TestMinikubeProfile (100.85s)

TestMountStart/serial/StartWithMountFirst (27.35s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-377204 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0701 12:33:22.863690  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-377204 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (26.346671812s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.35s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-377204 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-377204 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
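
Note: the mount verification above is a plain shell check over SSH. A minimal by-hand equivalent, using the profile name from this run (a sketch; the exact 9p mount options in the grep output vary by host):
	out/minikube-linux-amd64 -p mount-start-1-377204 ssh -- ls /minikube-host
	out/minikube-linux-amd64 -p mount-start-1-377204 ssh -- mount | grep 9p
A surviving 9p entry is what the test treats as proof that the host directory is attached inside the VM.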

TestMountStart/serial/StartWithMountSecond (29.05s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-396436 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-396436 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (28.048083121s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.05s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-396436 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-396436 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-377204 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

TestMountStart/serial/VerifyMountPostDelete (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-396436 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-396436 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

TestMountStart/serial/Stop (2.47s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-396436
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-396436: (2.473515404s)
--- PASS: TestMountStart/serial/Stop (2.47s)

TestMountStart/serial/RestartStopped (25.91s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-396436
E0701 12:34:11.881684  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-396436: (24.908889897s)
--- PASS: TestMountStart/serial/RestartStopped (25.91s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-396436 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-396436 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (117.79s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-033979 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0701 12:35:34.926399  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-033979 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (1m57.399972566s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (117.79s)

TestMultiNode/serial/DeployApp2Nodes (4.6s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033979 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033979 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-033979 -- rollout status deployment/busybox: (3.025059825s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033979 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033979 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033979 -- exec busybox-fc5497c4f-pnl6l -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033979 -- exec busybox-fc5497c4f-rmr52 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033979 -- exec busybox-fc5497c4f-pnl6l -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033979 -- exec busybox-fc5497c4f-rmr52 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033979 -- exec busybox-fc5497c4f-pnl6l -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033979 -- exec busybox-fc5497c4f-rmr52 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.60s)

TestMultiNode/serial/PingHostFrom2Pods (0.85s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033979 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033979 -- exec busybox-fc5497c4f-pnl6l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033979 -- exec busybox-fc5497c4f-pnl6l -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033979 -- exec busybox-fc5497c4f-rmr52 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033979 -- exec busybox-fc5497c4f-rmr52 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)
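
Note: inside each busybox pod the test resolves the host's well-known name and pings the result once. A sketch of the same two steps (the NR==5 / field-3 offsets assume busybox nslookup's output layout):
	HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
	ping -c 1 "$HOST_IP"
In this run the resolved address was the libvirt gateway, 192.168.39.1.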

TestMultiNode/serial/AddNode (48.58s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-033979 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-033979 -v 3 --alsologtostderr: (48.030660168s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.58s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-033979 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (7.14s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 cp testdata/cp-test.txt multinode-033979:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 ssh -n multinode-033979 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 cp multinode-033979:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3044330709/001/cp-test_multinode-033979.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 ssh -n multinode-033979 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 cp multinode-033979:/home/docker/cp-test.txt multinode-033979-m02:/home/docker/cp-test_multinode-033979_multinode-033979-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 ssh -n multinode-033979 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 ssh -n multinode-033979-m02 "sudo cat /home/docker/cp-test_multinode-033979_multinode-033979-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 cp multinode-033979:/home/docker/cp-test.txt multinode-033979-m03:/home/docker/cp-test_multinode-033979_multinode-033979-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 ssh -n multinode-033979 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 ssh -n multinode-033979-m03 "sudo cat /home/docker/cp-test_multinode-033979_multinode-033979-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 cp testdata/cp-test.txt multinode-033979-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 ssh -n multinode-033979-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 cp multinode-033979-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3044330709/001/cp-test_multinode-033979-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 ssh -n multinode-033979-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 cp multinode-033979-m02:/home/docker/cp-test.txt multinode-033979:/home/docker/cp-test_multinode-033979-m02_multinode-033979.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 ssh -n multinode-033979-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 ssh -n multinode-033979 "sudo cat /home/docker/cp-test_multinode-033979-m02_multinode-033979.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 cp multinode-033979-m02:/home/docker/cp-test.txt multinode-033979-m03:/home/docker/cp-test_multinode-033979-m02_multinode-033979-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 ssh -n multinode-033979-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 ssh -n multinode-033979-m03 "sudo cat /home/docker/cp-test_multinode-033979-m02_multinode-033979-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 cp testdata/cp-test.txt multinode-033979-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 ssh -n multinode-033979-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 cp multinode-033979-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3044330709/001/cp-test_multinode-033979-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 ssh -n multinode-033979-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 cp multinode-033979-m03:/home/docker/cp-test.txt multinode-033979:/home/docker/cp-test_multinode-033979-m03_multinode-033979.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 ssh -n multinode-033979-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 ssh -n multinode-033979 "sudo cat /home/docker/cp-test_multinode-033979-m03_multinode-033979.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 cp multinode-033979-m03:/home/docker/cp-test.txt multinode-033979-m02:/home/docker/cp-test_multinode-033979-m03_multinode-033979-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 ssh -n multinode-033979-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 ssh -n multinode-033979-m02 "sudo cat /home/docker/cp-test_multinode-033979-m03_multinode-033979-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.14s)
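
Note: the matrix above exercises every source/destination pair for `minikube cp` (host to node, node to host, node to node) and verifies each copy by reading the file back over SSH. One round trip from the matrix, as a sketch:
	out/minikube-linux-amd64 -p multinode-033979 cp testdata/cp-test.txt multinode-033979-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-033979 ssh -n multinode-033979-m02 "sudo cat /home/docker/cp-test.txt"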

TestMultiNode/serial/StopNode (3.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-033979 node stop m03: (2.436264864s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-033979 status: exit status 7 (417.599142ms)
-- stdout --
	multinode-033979
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-033979-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-033979-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-033979 status --alsologtostderr: exit status 7 (416.699269ms)
-- stdout --
	multinode-033979
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-033979-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-033979-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0701 12:37:36.280413  662667 out.go:291] Setting OutFile to fd 1 ...
	I0701 12:37:36.280841  662667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:37:36.280858  662667 out.go:304] Setting ErrFile to fd 2...
	I0701 12:37:36.280867  662667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:37:36.281355  662667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
	I0701 12:37:36.281664  662667 out.go:298] Setting JSON to false
	I0701 12:37:36.281702  662667 mustload.go:65] Loading cluster: multinode-033979
	I0701 12:37:36.281989  662667 notify.go:220] Checking for updates...
	I0701 12:37:36.282626  662667 config.go:182] Loaded profile config "multinode-033979": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:37:36.282653  662667 status.go:255] checking status of multinode-033979 ...
	I0701 12:37:36.283072  662667 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:37:36.283139  662667 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:37:36.299014  662667 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34669
	I0701 12:37:36.299557  662667 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:37:36.300235  662667 main.go:141] libmachine: Using API Version  1
	I0701 12:37:36.300261  662667 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:37:36.300602  662667 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:37:36.300838  662667 main.go:141] libmachine: (multinode-033979) Calling .GetState
	I0701 12:37:36.302706  662667 status.go:330] multinode-033979 host status = "Running" (err=<nil>)
	I0701 12:37:36.302726  662667 host.go:66] Checking if "multinode-033979" exists ...
	I0701 12:37:36.303078  662667 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:37:36.303123  662667 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:37:36.320094  662667 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45387
	I0701 12:37:36.320576  662667 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:37:36.321102  662667 main.go:141] libmachine: Using API Version  1
	I0701 12:37:36.321125  662667 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:37:36.321445  662667 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:37:36.321665  662667 main.go:141] libmachine: (multinode-033979) Calling .GetIP
	I0701 12:37:36.324463  662667 main.go:141] libmachine: (multinode-033979) DBG | domain multinode-033979 has defined MAC address 52:54:00:43:29:3f in network mk-multinode-033979
	I0701 12:37:36.324891  662667 main.go:141] libmachine: (multinode-033979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:29:3f", ip: ""} in network mk-multinode-033979: {Iface:virbr1 ExpiryTime:2024-07-01 13:34:47 +0000 UTC Type:0 Mac:52:54:00:43:29:3f Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-033979 Clientid:01:52:54:00:43:29:3f}
	I0701 12:37:36.324926  662667 main.go:141] libmachine: (multinode-033979) DBG | domain multinode-033979 has defined IP address 192.168.39.225 and MAC address 52:54:00:43:29:3f in network mk-multinode-033979
	I0701 12:37:36.325058  662667 host.go:66] Checking if "multinode-033979" exists ...
	I0701 12:37:36.325348  662667 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:37:36.325408  662667 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:37:36.340950  662667 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45843
	I0701 12:37:36.341380  662667 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:37:36.341877  662667 main.go:141] libmachine: Using API Version  1
	I0701 12:37:36.341899  662667 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:37:36.342253  662667 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:37:36.342492  662667 main.go:141] libmachine: (multinode-033979) Calling .DriverName
	I0701 12:37:36.342700  662667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 12:37:36.342730  662667 main.go:141] libmachine: (multinode-033979) Calling .GetSSHHostname
	I0701 12:37:36.345844  662667 main.go:141] libmachine: (multinode-033979) DBG | domain multinode-033979 has defined MAC address 52:54:00:43:29:3f in network mk-multinode-033979
	I0701 12:37:36.346304  662667 main.go:141] libmachine: (multinode-033979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:29:3f", ip: ""} in network mk-multinode-033979: {Iface:virbr1 ExpiryTime:2024-07-01 13:34:47 +0000 UTC Type:0 Mac:52:54:00:43:29:3f Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-033979 Clientid:01:52:54:00:43:29:3f}
	I0701 12:37:36.346360  662667 main.go:141] libmachine: (multinode-033979) DBG | domain multinode-033979 has defined IP address 192.168.39.225 and MAC address 52:54:00:43:29:3f in network mk-multinode-033979
	I0701 12:37:36.346461  662667 main.go:141] libmachine: (multinode-033979) Calling .GetSSHPort
	I0701 12:37:36.346647  662667 main.go:141] libmachine: (multinode-033979) Calling .GetSSHKeyPath
	I0701 12:37:36.346801  662667 main.go:141] libmachine: (multinode-033979) Calling .GetSSHUsername
	I0701 12:37:36.346921  662667 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/multinode-033979/id_rsa Username:docker}
	I0701 12:37:36.425840  662667 ssh_runner.go:195] Run: systemctl --version
	I0701 12:37:36.431730  662667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:37:36.446870  662667 kubeconfig.go:125] found "multinode-033979" server: "https://192.168.39.225:8443"
	I0701 12:37:36.446910  662667 api_server.go:166] Checking apiserver status ...
	I0701 12:37:36.446952  662667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 12:37:36.460925  662667 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1944/cgroup
	W0701 12:37:36.470185  662667 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1944/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0701 12:37:36.470247  662667 ssh_runner.go:195] Run: ls
	I0701 12:37:36.474706  662667 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0701 12:37:36.478891  662667 api_server.go:279] https://192.168.39.225:8443/healthz returned 200:
	ok
	I0701 12:37:36.478919  662667 status.go:422] multinode-033979 apiserver status = Running (err=<nil>)
	I0701 12:37:36.478930  662667 status.go:257] multinode-033979 status: &{Name:multinode-033979 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0701 12:37:36.478956  662667 status.go:255] checking status of multinode-033979-m02 ...
	I0701 12:37:36.479261  662667 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:37:36.479301  662667 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:37:36.494832  662667 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39449
	I0701 12:37:36.495346  662667 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:37:36.495864  662667 main.go:141] libmachine: Using API Version  1
	I0701 12:37:36.495884  662667 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:37:36.496198  662667 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:37:36.496421  662667 main.go:141] libmachine: (multinode-033979-m02) Calling .GetState
	I0701 12:37:36.497990  662667 status.go:330] multinode-033979-m02 host status = "Running" (err=<nil>)
	I0701 12:37:36.498007  662667 host.go:66] Checking if "multinode-033979-m02" exists ...
	I0701 12:37:36.498308  662667 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:37:36.498370  662667 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:37:36.514673  662667 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34123
	I0701 12:37:36.515084  662667 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:37:36.515688  662667 main.go:141] libmachine: Using API Version  1
	I0701 12:37:36.515738  662667 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:37:36.516087  662667 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:37:36.516326  662667 main.go:141] libmachine: (multinode-033979-m02) Calling .GetIP
	I0701 12:37:36.518947  662667 main.go:141] libmachine: (multinode-033979-m02) DBG | domain multinode-033979-m02 has defined MAC address 52:54:00:0b:97:c7 in network mk-multinode-033979
	I0701 12:37:36.519397  662667 main.go:141] libmachine: (multinode-033979-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:97:c7", ip: ""} in network mk-multinode-033979: {Iface:virbr1 ExpiryTime:2024-07-01 13:36:01 +0000 UTC Type:0 Mac:52:54:00:0b:97:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-033979-m02 Clientid:01:52:54:00:0b:97:c7}
	I0701 12:37:36.519429  662667 main.go:141] libmachine: (multinode-033979-m02) DBG | domain multinode-033979-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:0b:97:c7 in network mk-multinode-033979
	I0701 12:37:36.519545  662667 host.go:66] Checking if "multinode-033979-m02" exists ...
	I0701 12:37:36.519955  662667 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:37:36.520001  662667 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:37:36.535737  662667 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40385
	I0701 12:37:36.536214  662667 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:37:36.536790  662667 main.go:141] libmachine: Using API Version  1
	I0701 12:37:36.536815  662667 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:37:36.537121  662667 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:37:36.537339  662667 main.go:141] libmachine: (multinode-033979-m02) Calling .DriverName
	I0701 12:37:36.537576  662667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 12:37:36.537602  662667 main.go:141] libmachine: (multinode-033979-m02) Calling .GetSSHHostname
	I0701 12:37:36.540881  662667 main.go:141] libmachine: (multinode-033979-m02) DBG | domain multinode-033979-m02 has defined MAC address 52:54:00:0b:97:c7 in network mk-multinode-033979
	I0701 12:37:36.541440  662667 main.go:141] libmachine: (multinode-033979-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:97:c7", ip: ""} in network mk-multinode-033979: {Iface:virbr1 ExpiryTime:2024-07-01 13:36:01 +0000 UTC Type:0 Mac:52:54:00:0b:97:c7 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-033979-m02 Clientid:01:52:54:00:0b:97:c7}
	I0701 12:37:36.541475  662667 main.go:141] libmachine: (multinode-033979-m02) DBG | domain multinode-033979-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:0b:97:c7 in network mk-multinode-033979
	I0701 12:37:36.541611  662667 main.go:141] libmachine: (multinode-033979-m02) Calling .GetSSHPort
	I0701 12:37:36.541837  662667 main.go:141] libmachine: (multinode-033979-m02) Calling .GetSSHKeyPath
	I0701 12:37:36.542011  662667 main.go:141] libmachine: (multinode-033979-m02) Calling .GetSSHUsername
	I0701 12:37:36.542205  662667 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/multinode-033979-m02/id_rsa Username:docker}
	I0701 12:37:36.617320  662667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:37:36.630779  662667 status.go:257] multinode-033979-m02 status: &{Name:multinode-033979-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0701 12:37:36.630855  662667 status.go:255] checking status of multinode-033979-m03 ...
	I0701 12:37:36.631193  662667 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:37:36.631248  662667 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:37:36.647126  662667 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43533
	I0701 12:37:36.647623  662667 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:37:36.648139  662667 main.go:141] libmachine: Using API Version  1
	I0701 12:37:36.648163  662667 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:37:36.648554  662667 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:37:36.648792  662667 main.go:141] libmachine: (multinode-033979-m03) Calling .GetState
	I0701 12:37:36.650397  662667 status.go:330] multinode-033979-m03 host status = "Stopped" (err=<nil>)
	I0701 12:37:36.650421  662667 status.go:343] host is not running, skipping remaining checks
	I0701 12:37:36.650429  662667 status.go:257] multinode-033979-m03 status: &{Name:multinode-033979-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.27s)
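
Note: `status` deliberately exits non-zero (7 in this run) as soon as any node is not running, which is why the Non-zero exit above still counts toward a pass. Sketch of the sequence:
	out/minikube-linux-amd64 -p multinode-033979 node stop m03
	out/minikube-linux-amd64 -p multinode-033979 status   # exit status 7: m03 reports host/kubelet Stopped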

TestMultiNode/serial/StartAfterStop (32.27s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-033979 node start m03 -v=7 --alsologtostderr: (31.631906749s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (32.27s)

TestMultiNode/serial/RestartKeepsNodes (260.2s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-033979
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-033979
E0701 12:38:22.863022  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-033979: (28.043017524s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-033979 --wait=true -v=8 --alsologtostderr
E0701 12:39:11.880485  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-033979 --wait=true -v=8 --alsologtostderr: (3m52.060453512s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-033979
--- PASS: TestMultiNode/serial/RestartKeepsNodes (260.20s)

TestMultiNode/serial/DeleteNode (2.35s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-033979 node delete m03: (1.838141313s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.35s)
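
Note: the go-template in the final check walks every node's status.conditions and prints the value of each Ready condition, so after deleting m03 the expected shape is one "True" per remaining node (illustrative; exact whitespace depends on the template quoting):
	kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"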

TestMultiNode/serial/StopMultiNode (25.1s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-033979 stop: (24.934787778s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-033979 status: exit status 7 (80.243031ms)
-- stdout --
	multinode-033979
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-033979-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-033979 status --alsologtostderr: exit status 7 (82.244597ms)
-- stdout --
	multinode-033979
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-033979-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0701 12:42:56.528645  664695 out.go:291] Setting OutFile to fd 1 ...
	I0701 12:42:56.528885  664695 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:42:56.528893  664695 out.go:304] Setting ErrFile to fd 2...
	I0701 12:42:56.528897  664695 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 12:42:56.529055  664695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
	I0701 12:42:56.529209  664695 out.go:298] Setting JSON to false
	I0701 12:42:56.529239  664695 mustload.go:65] Loading cluster: multinode-033979
	I0701 12:42:56.529273  664695 notify.go:220] Checking for updates...
	I0701 12:42:56.529598  664695 config.go:182] Loaded profile config "multinode-033979": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0701 12:42:56.529614  664695 status.go:255] checking status of multinode-033979 ...
	I0701 12:42:56.530007  664695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:42:56.530055  664695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:42:56.544687  664695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43283
	I0701 12:42:56.545145  664695 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:42:56.545742  664695 main.go:141] libmachine: Using API Version  1
	I0701 12:42:56.545763  664695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:42:56.546177  664695 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:42:56.546433  664695 main.go:141] libmachine: (multinode-033979) Calling .GetState
	I0701 12:42:56.548065  664695 status.go:330] multinode-033979 host status = "Stopped" (err=<nil>)
	I0701 12:42:56.548077  664695 status.go:343] host is not running, skipping remaining checks
	I0701 12:42:56.548083  664695 status.go:257] multinode-033979 status: &{Name:multinode-033979 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0701 12:42:56.548112  664695 status.go:255] checking status of multinode-033979-m02 ...
	I0701 12:42:56.548390  664695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0701 12:42:56.548428  664695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0701 12:42:56.563187  664695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44589
	I0701 12:42:56.563571  664695 main.go:141] libmachine: () Calling .GetVersion
	I0701 12:42:56.564017  664695 main.go:141] libmachine: Using API Version  1
	I0701 12:42:56.564045  664695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0701 12:42:56.564397  664695 main.go:141] libmachine: () Calling .GetMachineName
	I0701 12:42:56.564613  664695 main.go:141] libmachine: (multinode-033979-m02) Calling .GetState
	I0701 12:42:56.565942  664695 status.go:330] multinode-033979-m02 host status = "Stopped" (err=<nil>)
	I0701 12:42:56.565955  664695 status.go:343] host is not running, skipping remaining checks
	I0701 12:42:56.565961  664695 status.go:257] multinode-033979-m02 status: &{Name:multinode-033979-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.10s)

TestMultiNode/serial/RestartMultiNode (87.57s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-033979 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0701 12:43:22.863393  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
E0701 12:44:11.881233  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-033979 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m27.066468469s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033979 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (87.57s)

TestMultiNode/serial/ValidateNameConflict (48.99s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-033979
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-033979-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-033979-m02 --driver=kvm2 : exit status 14 (62.168617ms)
-- stdout --
	* [multinode-033979-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19166
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19166-630650/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-630650/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-033979-m02' is duplicated with machine name 'multinode-033979-m02' in profile 'multinode-033979'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-033979-m03 --driver=kvm2 
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-033979-m03 --driver=kvm2 : (47.674143379s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-033979
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-033979: exit status 80 (215.395013ms)
-- stdout --
	* Adding node m03 to cluster multinode-033979 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-033979-m03 already exists in multinode-033979-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-033979-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.99s)
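
Note: the two failures above are the point of this test. A sketch of both collisions, with the exit codes from this run:
	out/minikube-linux-amd64 start -p multinode-033979-m02 --driver=kvm2   # exit 14: name clashes with machine multinode-033979-m02 in profile multinode-033979
	out/minikube-linux-amd64 node add -p multinode-033979                  # exit 80: generated node name collides with the multinode-033979-m03 profile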

TestPreload (150.88s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-380213 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0701 12:46:25.909614  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-380213 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m22.374552838s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-380213 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-380213 image pull gcr.io/k8s-minikube/busybox: (1.199888663s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-380213
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-380213: (12.566908077s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-380213 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-380213 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (53.463504352s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-380213 image list
helpers_test.go:175: Cleaning up "test-preload-380213" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-380213
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-380213: (1.079337224s)
--- PASS: TestPreload (150.88s)

TestScheduledStopUnix (120.53s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-066909 --memory=2048 --driver=kvm2 
E0701 12:48:22.863759  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-066909 --memory=2048 --driver=kvm2 : (48.947498825s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-066909 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-066909 -n scheduled-stop-066909
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-066909 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-066909 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-066909 -n scheduled-stop-066909
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-066909
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-066909 --schedule 15s
E0701 12:49:11.881295  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-066909
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-066909: exit status 7 (64.508816ms)
-- stdout --
	scheduled-stop-066909
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-066909 -n scheduled-stop-066909
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-066909 -n scheduled-stop-066909: exit status 7 (65.796221ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-066909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-066909
--- PASS: TestScheduledStopUnix (120.53s)
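
Note: the scheduled-stop lifecycle exercised above, as a sketch (profile name and times taken from this run):
	out/minikube-linux-amd64 stop -p scheduled-stop-066909 --schedule 5m        # arm a stop 5 minutes out
	out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-066909
	out/minikube-linux-amd64 stop -p scheduled-stop-066909 --cancel-scheduled   # disarm it
Once a 15s schedule is left to fire, `status` exits 7 with the host reported Stopped, as seen at the end of the log.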

TestSkaffold (140.47s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3252492743 version
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-459115 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-459115 --memory=2600 --driver=kvm2 : (50.953220165s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3252492743 run --minikube-profile skaffold-459115 --kube-context skaffold-459115 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3252492743 run --minikube-profile skaffold-459115 --kube-context skaffold-459115 --status-check=true --port-forward=false --interactive=false: (1m16.063764819s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-595d49fcb9-zg4dr" [01fa8652-8588-4749-bfd9-0bd86ba32cd6] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.00323729s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-65cf4b557b-jrtx2" [d454e943-8a6a-4a7b-aafe-738d237b6cff] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003329508s
helpers_test.go:175: Cleaning up "skaffold-459115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-459115
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-459115: (1.207298935s)
--- PASS: TestSkaffold (140.47s)
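
Note: the test drives a pinned skaffold binary (downloaded to the /tmp path shown) against the freshly started profile. The invocation, for reference:
	/tmp/skaffold.exe3252492743 run --minikube-profile skaffold-459115 --kube-context skaffold-459115 --status-check=true --port-forward=false --interactive=false
Status checking is left on, so skaffold itself waits for the leeroy-app and leeroy-web deployments before returning.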

TestRunningBinaryUpgrade (191.32s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1832696308 start -p running-upgrade-571833 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1832696308 start -p running-upgrade-571833 --memory=2200 --vm-driver=kvm2 : (2m9.653933025s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-571833 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-571833 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m0.134289487s)
helpers_test.go:175: Cleaning up "running-upgrade-571833" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-571833
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-571833: (1.118637555s)
--- PASS: TestRunningBinaryUpgrade (191.32s)
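
Note: the upgrade path here is "same profile, two binaries": the cluster is created with a pinned v1.26.0 release and then restarted in place with the binary under test. Sketch:
	/tmp/minikube-v1.26.0.1832696308 start -p running-upgrade-571833 --memory=2200 --vm-driver=kvm2               # old release
	out/minikube-linux-amd64 start -p running-upgrade-571833 --memory=2200 --alsologtostderr -v=1 --driver=kvm2   # binary under test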

TestKubernetesUpgrade (207.05s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-635930 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
E0701 12:52:14.927046  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-635930 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (1m45.142757653s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-635930
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-635930: (3.355028039s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-635930 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-635930 status --format={{.Host}}: exit status 7 (65.164329ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-635930 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2 
E0701 12:54:11.881073  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-635930 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2 : (54.295771654s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-635930 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-635930 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-635930 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (113.555548ms)
-- stdout --
	* [kubernetes-upgrade-635930] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19166
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19166-630650/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-630650/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-635930
	    minikube start -p kubernetes-upgrade-635930 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6359302 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.2, by running:
	    
	    minikube start -p kubernetes-upgrade-635930 --kubernetes-version=v1.30.2

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-635930 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-635930 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2 : (43.004850083s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-635930" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-635930
--- PASS: TestKubernetesUpgrade (207.05s)
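Note: the upgrade path this test exercises can be reproduced by hand; a minimal sketch (the "k8s-upgrade-demo" profile name is illustrative, not from the suite):

	# start on the old version, stop, then restart the same profile on the new version
	minikube start -p k8s-upgrade-demo --kubernetes-version=v1.20.0 --driver=kvm2
	minikube stop -p k8s-upgrade-demo
	minikube start -p k8s-upgrade-demo --kubernetes-version=v1.30.2 --driver=kvm2
	# a direct downgrade of the same profile should be rejected (exit status 106, as above)
	minikube start -p k8s-upgrade-demo --kubernetes-version=v1.20.0 --driver=kvm2
	kubectl --context k8s-upgrade-demo version --output=json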

TestPause/serial/Start (91.32s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-627616 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-627616 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m31.319454771s)
--- PASS: TestPause/serial/Start (91.32s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-377243 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-377243 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (63.456512ms)

-- stdout --
	* [NoKubernetes-377243] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19166
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19166-630650/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-630650/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)
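Note: the asserted failure comes from passing mutually exclusive flags; the remedy in the stderr above applies when a version is pinned in global config. Sketch:

	minikube config unset kubernetes-version
	minikube start -p NoKubernetes-377243 --no-kubernetes --driver=kvm2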

TestNoKubernetes/serial/StartWithK8s (74.74s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-377243 --driver=kvm2 
E0701 12:53:22.863396  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-377243 --driver=kvm2 : (1m14.460753082s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-377243 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (74.74s)

TestPause/serial/SecondStartNoReconfiguration (71.51s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-627616 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-627616 --alsologtostderr -v=1 --driver=kvm2 : (1m11.486630478s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (71.51s)

TestNoKubernetes/serial/StartWithStopK8s (8.1s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-377243 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-377243 --no-kubernetes --driver=kvm2 : (7.009679409s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-377243 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-377243 status -o json: exit status 2 (262.419253ms)

-- stdout --
	{"Name":"NoKubernetes-377243","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-377243
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.10s)
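Note: the JSON emitted by "status -o json" above can be checked field by field; jq here is an assumption, not something the suite uses. Sketch:

	minikube -p NoKubernetes-377243 status -o json | jq -r '.Host,.Kubelet'
	# expect "Running" then "Stopped"; status itself exits 2 while a component is down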

TestNoKubernetes/serial/Start (28.69s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-377243 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-377243 --no-kubernetes --driver=kvm2 : (28.687030825s)
--- PASS: TestNoKubernetes/serial/Start (28.69s)

TestPause/serial/Pause (0.62s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-627616 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.62s)

TestPause/serial/VerifyStatus (0.25s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-627616 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-627616 --output=json --layout=cluster: exit status 2 (248.606463ms)

-- stdout --
	{"Name":"pause-627616","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-627616","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)
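Note: with --layout=cluster, status reports HTTP-style codes per component (200 OK, 405 Stopped, 418 Paused), and the non-zero exit above is expected while paused. A jq-based read of the same output (jq is an assumption):

	minikube status -p pause-627616 --output=json --layout=cluster \
		| jq -r '.Nodes[].Components.apiserver.StatusName'
	# prints "Paused" after "minikube pause", "OK" after "minikube unpause"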

TestPause/serial/Unpause (0.6s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-627616 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.60s)

TestPause/serial/PauseAgain (0.92s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-627616 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.92s)

TestPause/serial/DeletePaused (1.09s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-627616 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-627616 --alsologtostderr -v=5: (1.093132232s)
--- PASS: TestPause/serial/DeletePaused (1.09s)

TestPause/serial/VerifyDeletedResources (0.56s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.56s)

TestStoppedBinaryUpgrade/Setup (0.37s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.37s)

TestStoppedBinaryUpgrade/Upgrade (165.69s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.717379923 start -p stopped-upgrade-095771 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.717379923 start -p stopped-upgrade-095771 --memory=2200 --vm-driver=kvm2 : (1m8.292042377s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.717379923 -p stopped-upgrade-095771 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.717379923 -p stopped-upgrade-095771 stop: (12.447696391s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-095771 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-095771 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m24.952150254s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (165.69s)
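Note: the same stopped-binary upgrade can be walked through manually. The test uses a pre-fetched old binary (/tmp/minikube-v1.26.0.717379923); the download URL below follows minikube's release-asset naming and is an assumption here, and the "stopped-upgrade-demo" profile is illustrative. Sketch:

	curl -Lo /tmp/minikube-v1.26.0 https://github.com/kubernetes/minikube/releases/download/v1.26.0/minikube-linux-amd64
	chmod +x /tmp/minikube-v1.26.0
	/tmp/minikube-v1.26.0 start -p stopped-upgrade-demo --memory=2200 --vm-driver=kvm2
	/tmp/minikube-v1.26.0 -p stopped-upgrade-demo stop
	minikube start -p stopped-upgrade-demo --memory=2200 --driver=kvm2   # newer binary takes over the stopped cluster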

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-377243 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-377243 "sudo systemctl is-active --quiet service kubelet": exit status 1 (215.36814ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
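Note: the assertion relies on "systemctl is-active" exiting non-zero for an inactive unit (status 3 inside the VM, surfaced as exit status 1 by minikube ssh), so a failing exit is the passing case. Run by hand:

	minikube ssh -p NoKubernetes-377243 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet is not running"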

TestNoKubernetes/serial/ProfileList (14.97s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.527906504s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (14.97s)

TestNoKubernetes/serial/Stop (2.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-377243
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-377243: (2.311244063s)
--- PASS: TestNoKubernetes/serial/Stop (2.31s)

TestNoKubernetes/serial/StartNoArgs (56.1s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-377243 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-377243 --driver=kvm2 : (56.095950396s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (56.10s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-377243 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-377243 "sudo systemctl is-active --quiet service kubelet": exit status 1 (228.469288ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.41s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-095771
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-095771: (1.413163319s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.41s)

TestNetworkPlugins/group/auto/Start (80.18s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-262175 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-262175 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m20.183329691s)
--- PASS: TestNetworkPlugins/group/auto/Start (80.18s)
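Note: "auto" is the default CNI selection; the remaining groups pin a plugin explicitly via --cni (or the legacy --network-plugin). The variants exercised in this run, condensed ("<profile>" is a placeholder; the manifest path is relative to the test directory):

	minikube start -p <profile> --cni=kindnet --driver=kvm2
	minikube start -p <profile> --cni=calico --driver=kvm2
	minikube start -p <profile> --cni=testdata/kube-flannel.yaml --driver=kvm2   # custom manifest
	minikube start -p <profile> --cni=false --driver=kvm2                        # no CNI at all
	minikube start -p <profile> --network-plugin=kubenet --driver=kvm2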

TestNetworkPlugins/group/kindnet/Start (105.28s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-262175 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
E0701 12:59:11.880818  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
E0701 12:59:38.220024  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/skaffold-459115/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-262175 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m45.282284686s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (105.28s)

TestNetworkPlugins/group/calico/Start (115.43s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-262175 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-262175 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m55.42541599s)
--- PASS: TestNetworkPlugins/group/calico/Start (115.43s)

TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-262175 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

TestNetworkPlugins/group/auto/NetCatPod (11.22s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-262175 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-75pqs" [fc6b4dc7-518a-4c28-a68c-535eded8acb5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-75pqs" [fc6b4dc7-518a-4c28-a68c-535eded8acb5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.00413391s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.22s)
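Note: the pod-ready wait the helper performs can be approximated with kubectl's own waiter (an alternative to the suite's polling, not what it actually runs):

	kubectl --context auto-262175 wait --for=condition=Ready pod -l app=netcat --timeout=15m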

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-262175 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-262175 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-262175 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
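Note: DNS, Localhost, and HairPin are three probes against the same netcat deployment, repeated for every plugin group below. Condensed (pass the matching --context per profile):

	kubectl exec deployment/netcat -- nslookup kubernetes.default                    # cluster DNS
	kubectl exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"    # pod-local port
	kubectl exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"       # hairpin via its own service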

TestNetworkPlugins/group/custom-flannel/Start (76.98s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-262175 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-262175 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m16.983203733s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (76.98s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xhkkf" [1de826cb-04ad-43ee-90cd-f72d5c569a93] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005246199s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-262175 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-262175 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-52bmf" [3eef2c46-3502-4513-a4b7-f08dbafb53ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-52bmf" [3eef2c46-3502-4513-a4b7-f08dbafb53ce] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004128076s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-262175 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-262175 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-262175 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/false/Start (79.65s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-262175 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-262175 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m19.648253551s)
--- PASS: TestNetworkPlugins/group/false/Start (79.65s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-hjtj5" [644f2f1d-3762-42b6-bdc2-518a2778c3da] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005133577s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-262175 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

TestNetworkPlugins/group/calico/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-262175 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-qzvcw" [44bb328e-0978-48b2-955f-49c23bf06592] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-qzvcw" [44bb328e-0978-48b2-955f-49c23bf06592] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004446347s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.24s)

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-262175 exec deployment/netcat -- nslookup kubernetes.default
E0701 13:01:54.375716  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/skaffold-459115/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-262175 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-262175 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-262175 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-262175 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-4twp6" [47a7adab-6681-44e1-aa21-460f484d749b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-4twp6" [47a7adab-6681-44e1-aa21-460f484d749b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004965931s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.33s)

TestNetworkPlugins/group/enable-default-cni/Start (82.69s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-262175 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-262175 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m22.691579591s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (82.69s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-262175 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-262175 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-262175 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

TestNetworkPlugins/group/flannel/Start (99.64s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-262175 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-262175 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m39.642861761s)
--- PASS: TestNetworkPlugins/group/flannel/Start (99.64s)

TestNetworkPlugins/group/bridge/Start (108.59s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-262175 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E0701 13:02:32.708107  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/gvisor-264306/client.crt: no such file or directory
E0701 13:02:33.348546  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/gvisor-264306/client.crt: no such file or directory
E0701 13:02:34.629576  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/gvisor-264306/client.crt: no such file or directory
E0701 13:02:37.190674  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/gvisor-264306/client.crt: no such file or directory
E0701 13:02:42.310871  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/gvisor-264306/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-262175 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m48.590529845s)
--- PASS: TestNetworkPlugins/group/bridge/Start (108.59s)

TestNetworkPlugins/group/false/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-262175 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.22s)

TestNetworkPlugins/group/false/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-262175 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hxszs" [cafcdf78-8db4-4e4d-a808-331b40136384] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0701 13:02:52.551140  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/gvisor-264306/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-hxszs" [cafcdf78-8db4-4e4d-a808-331b40136384] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.006063238s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.28s)

TestNetworkPlugins/group/false/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-262175 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.17s)

TestNetworkPlugins/group/false/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-262175 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.14s)

TestNetworkPlugins/group/false/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-262175 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.13s)

TestNetworkPlugins/group/kubenet/Start (127.88s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-262175 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
E0701 13:03:22.863258  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-262175 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (2m7.875690426s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (127.88s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-262175 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-262175 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-228fj" [0991b7ea-be2d-4694-86c1-fa257edf64e6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-228fj" [0991b7ea-be2d-4694-86c1-fa257edf64e6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005794254s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-262175 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-262175 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-262175 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

TestStartStop/group/old-k8s-version/serial/FirstStart (167.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-169837 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-169837 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (2m47.454431973s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (167.45s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-vs45c" [8a0c770c-bfd6-42a0-a4d4-7db84327e4d6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00560576s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-262175 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/flannel/NetCatPod (13.64s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-262175 replace --force -f testdata/netcat-deployment.yaml
E0701 13:04:11.880700  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
net_test.go:149: (dbg) Done: kubectl --context flannel-262175 replace --force -f testdata/netcat-deployment.yaml: (1.577312079s)
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-xzjbt" [5bc1f212-37fa-4038-95fa-62edba9045ae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-xzjbt" [5bc1f212-37fa-4038-95fa-62edba9045ae] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004507813s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.64s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-262175 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

TestNetworkPlugins/group/bridge/NetCatPod (10.54s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-262175 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hxvml" [8e197bcb-740e-4448-a533-499e52de964f] Pending
helpers_test.go:344: "netcat-6bc787d567-hxvml" [8e197bcb-740e-4448-a533-499e52de964f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-hxvml" [8e197bcb-740e-4448-a533-499e52de964f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.00461894s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.54s)

TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-262175 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-262175 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-262175 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestNetworkPlugins/group/bridge/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-262175 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-262175 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-262175 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

TestStartStop/group/no-preload/serial/FirstStart (86.49s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-392617 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-392617 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.30.2: (1m26.485155205s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (86.49s)
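Note: --preload=false skips minikube's preloaded-images tarball, so images are pulled individually, trading start time for an exercise of the pull path. Equivalent manual start (profile name from the log):

	minikube start -p no-preload-392617 --memory=2200 --preload=false --kubernetes-version=v1.30.2 --driver=kvm2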

TestStartStop/group/embed-certs/serial/FirstStart (103.53s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-234666 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.2
E0701 13:05:15.913802  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/gvisor-264306/client.crt: no such file or directory
E0701 13:05:18.498741  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/auto-262175/client.crt: no such file or directory
E0701 13:05:18.504048  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/auto-262175/client.crt: no such file or directory
E0701 13:05:18.514433  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/auto-262175/client.crt: no such file or directory
E0701 13:05:18.534786  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/auto-262175/client.crt: no such file or directory
E0701 13:05:18.575845  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/auto-262175/client.crt: no such file or directory
E0701 13:05:18.656488  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/auto-262175/client.crt: no such file or directory
E0701 13:05:18.817727  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/auto-262175/client.crt: no such file or directory
E0701 13:05:19.138622  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/auto-262175/client.crt: no such file or directory
E0701 13:05:19.779209  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/auto-262175/client.crt: no such file or directory
E0701 13:05:21.059604  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/auto-262175/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-234666 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.2: (1m43.534226642s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (103.53s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-262175 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.23s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-262175 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-zjt5m" [98bbf041-d72a-4f68-873d-ed409fd7cd96] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0701 13:05:23.619886  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/auto-262175/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-zjt5m" [98bbf041-d72a-4f68-873d-ed409fd7cd96] Running
E0701 13:05:28.740170  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/auto-262175/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.005156465s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.38s)

TestNetworkPlugins/group/kubenet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-262175 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.20s)

TestNetworkPlugins/group/kubenet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-262175 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

TestNetworkPlugins/group/kubenet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-262175 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)
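
The three kubenet probes above (DNS, Localhost, HairPin) reduce to plain kubectl calls; as a minimal hand-run sketch, assuming the kubenet-262175 context and its netcat deployment from this run still exist, the same checks are:

  # DNS resolution from inside the pod network
  kubectl --context kubenet-262175 exec deployment/netcat -- nslookup kubernetes.default
  # localhost connectivity inside the pod
  kubectl --context kubenet-262175 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # hairpin: the pod reaching itself through its own service name
  kubectl --context kubenet-262175 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
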
E0701 13:12:32.070469  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/gvisor-264306/client.crt: no such file or directory
E0701 13:12:32.328438  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/custom-flannel-262175/client.crt: no such file or directory
E0701 13:12:47.632923  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/false-262175/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-940378 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.2
E0701 13:05:53.021519  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kindnet-262175/client.crt: no such file or directory
E0701 13:05:53.662338  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kindnet-262175/client.crt: no such file or directory
E0701 13:05:54.943264  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kindnet-262175/client.crt: no such file or directory
E0701 13:05:57.504406  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kindnet-262175/client.crt: no such file or directory
E0701 13:05:59.461490  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/auto-262175/client.crt: no such file or directory
E0701 13:06:02.625236  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kindnet-262175/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-940378 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.2: (1m14.171625361s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.17s)

TestStartStop/group/no-preload/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-392617 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1cf4d4b9-dd3d-484f-bafe-bd2843fce711] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1cf4d4b9-dd3d-484f-bafe-bd2843fce711] Running
E0701 13:06:12.865537  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kindnet-262175/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005258456s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-392617 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.32s)
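
For reference, the DeployApp flow above is reproducible by hand. A minimal sketch, assuming the no-preload-392617 context and the busybox manifest under the test repo's testdata directory; the kubectl wait line stands in for the test's own 8m polling helper:

  kubectl --context no-preload-392617 create -f testdata/busybox.yaml
  kubectl --context no-preload-392617 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
  kubectl --context no-preload-392617 exec busybox -- /bin/sh -c "ulimit -n"
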

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-392617 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-392617 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/no-preload/serial/Stop (14.34s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-392617 --alsologtostderr -v=3
E0701 13:06:33.346041  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kindnet-262175/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-392617 --alsologtostderr -v=3: (14.342076122s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (14.34s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-392617 -n no-preload-392617
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-392617 -n no-preload-392617: exit status 7 (76.862057ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-392617 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
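
The EnableAddonAfterStop check just shown relies on status exiting 7 for a stopped VM while addons enable still succeeds against the stored profile. A hand-run sketch of the same two steps, with the profile name from this run:

  # exit status 7 here simply means the host is Stopped (acceptable for this check)
  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-392617 -n no-preload-392617
  out/minikube-linux-amd64 addons enable dashboard -p no-preload-392617 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
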

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (303.34s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-392617 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-392617 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.30.2: (5m3.080448796s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-392617 -n no-preload-392617
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (303.34s)

TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-234666 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [452aa9c0-a88f-42ec-b933-e2cf52c0de37] Pending
helpers_test.go:344: "busybox" [452aa9c0-a88f-42ec-b933-e2cf52c0de37] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0701 13:06:36.760426  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/calico-262175/client.crt: no such file or directory
E0701 13:06:36.765787  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/calico-262175/client.crt: no such file or directory
E0701 13:06:36.776151  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/calico-262175/client.crt: no such file or directory
E0701 13:06:36.796498  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/calico-262175/client.crt: no such file or directory
E0701 13:06:36.836845  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/calico-262175/client.crt: no such file or directory
E0701 13:06:36.917002  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/calico-262175/client.crt: no such file or directory
E0701 13:06:37.077312  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/calico-262175/client.crt: no such file or directory
E0701 13:06:37.398520  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/calico-262175/client.crt: no such file or directory
E0701 13:06:38.039274  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/calico-262175/client.crt: no such file or directory
helpers_test.go:344: "busybox" [452aa9c0-a88f-42ec-b933-e2cf52c0de37] Running
E0701 13:06:39.320315  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/calico-262175/client.crt: no such file or directory
E0701 13:06:40.422204  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/auto-262175/client.crt: no such file or directory
E0701 13:06:41.880862  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/calico-262175/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005154932s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-234666 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-234666 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-234666 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/embed-certs/serial/Stop (13.38s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-234666 --alsologtostderr -v=3
E0701 13:06:47.001698  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/calico-262175/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-234666 --alsologtostderr -v=3: (13.376527111s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.38s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-169837 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [821a8826-aafa-4f88-9c40-8ba7068bf4ef] Pending
helpers_test.go:344: "busybox" [821a8826-aafa-4f88-9c40-8ba7068bf4ef] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [821a8826-aafa-4f88-9c40-8ba7068bf4ef] Running
E0701 13:06:54.375716  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/skaffold-459115/client.crt: no such file or directory
E0701 13:06:57.242929  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/calico-262175/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.005343825s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-169837 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.52s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-234666 -n embed-certs-234666
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-234666 -n embed-certs-234666: exit status 7 (75.721357ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-234666 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (298.64s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-234666 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-234666 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.2: (4m58.383586686s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-234666 -n embed-certs-234666
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (298.64s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-169837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-169837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.173327871s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-169837 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.28s)

TestStartStop/group/old-k8s-version/serial/Stop (12.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-169837 --alsologtostderr -v=3
E0701 13:07:04.643296  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/custom-flannel-262175/client.crt: no such file or directory
E0701 13:07:04.648648  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/custom-flannel-262175/client.crt: no such file or directory
E0701 13:07:04.658984  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/custom-flannel-262175/client.crt: no such file or directory
E0701 13:07:04.679371  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/custom-flannel-262175/client.crt: no such file or directory
E0701 13:07:04.720535  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/custom-flannel-262175/client.crt: no such file or directory
E0701 13:07:04.800915  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/custom-flannel-262175/client.crt: no such file or directory
E0701 13:07:04.961585  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/custom-flannel-262175/client.crt: no such file or directory
E0701 13:07:05.282193  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/custom-flannel-262175/client.crt: no such file or directory
E0701 13:07:05.922878  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/custom-flannel-262175/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-169837 --alsologtostderr -v=3: (12.668165868s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.67s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-940378 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9a53c4a5-840e-43a4-b8ac-4241df7d46f4] Pending
E0701 13:07:07.203633  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/custom-flannel-262175/client.crt: no such file or directory
helpers_test.go:344: "busybox" [9a53c4a5-840e-43a4-b8ac-4241df7d46f4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0701 13:07:09.764166  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/custom-flannel-262175/client.crt: no such file or directory
helpers_test.go:344: "busybox" [9a53c4a5-840e-43a4-b8ac-4241df7d46f4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004531452s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-940378 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169837 -n old-k8s-version-169837
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169837 -n old-k8s-version-169837: exit status 7 (68.020048ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-169837 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (415.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-169837 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
E0701 13:07:14.306556  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kindnet-262175/client.crt: no such file or directory
E0701 13:07:14.884876  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/custom-flannel-262175/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-169837 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (6m55.359354907s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169837 -n old-k8s-version-169837
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (415.61s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-940378 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-940378 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-940378 --alsologtostderr -v=3
E0701 13:07:17.723949  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/calico-262175/client.crt: no such file or directory
E0701 13:07:25.125511  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/custom-flannel-262175/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-940378 --alsologtostderr -v=3: (13.344908634s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-940378 -n default-k8s-diff-port-940378
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-940378 -n default-k8s-diff-port-940378: exit status 7 (105.797723ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-940378 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (340.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-940378 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.2
E0701 13:07:32.070430  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/gvisor-264306/client.crt: no such file or directory
E0701 13:07:45.605968  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/custom-flannel-262175/client.crt: no such file or directory
E0701 13:07:47.632315  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/false-262175/client.crt: no such file or directory
E0701 13:07:47.637633  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/false-262175/client.crt: no such file or directory
E0701 13:07:47.647970  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/false-262175/client.crt: no such file or directory
E0701 13:07:47.668269  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/false-262175/client.crt: no such file or directory
E0701 13:07:47.708937  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/false-262175/client.crt: no such file or directory
E0701 13:07:47.789284  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/false-262175/client.crt: no such file or directory
E0701 13:07:47.949802  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/false-262175/client.crt: no such file or directory
E0701 13:07:48.270090  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/false-262175/client.crt: no such file or directory
E0701 13:07:48.910769  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/false-262175/client.crt: no such file or directory
E0701 13:07:50.191611  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/false-262175/client.crt: no such file or directory
E0701 13:07:52.752612  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/false-262175/client.crt: no such file or directory
E0701 13:07:57.872931  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/false-262175/client.crt: no such file or directory
E0701 13:07:58.684838  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/calico-262175/client.crt: no such file or directory
E0701 13:07:59.754490  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/gvisor-264306/client.crt: no such file or directory
E0701 13:08:02.342900  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/auto-262175/client.crt: no such file or directory
E0701 13:08:08.113135  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/false-262175/client.crt: no such file or directory
E0701 13:08:22.863106  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
E0701 13:08:26.566705  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/custom-flannel-262175/client.crt: no such file or directory
E0701 13:08:28.594097  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/false-262175/client.crt: no such file or directory
E0701 13:08:35.925402  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/enable-default-cni-262175/client.crt: no such file or directory
E0701 13:08:35.930756  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/enable-default-cni-262175/client.crt: no such file or directory
E0701 13:08:35.941161  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/enable-default-cni-262175/client.crt: no such file or directory
E0701 13:08:35.961481  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/enable-default-cni-262175/client.crt: no such file or directory
E0701 13:08:36.001847  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/enable-default-cni-262175/client.crt: no such file or directory
E0701 13:08:36.082983  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/enable-default-cni-262175/client.crt: no such file or directory
E0701 13:08:36.227558  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kindnet-262175/client.crt: no such file or directory
E0701 13:08:36.243772  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/enable-default-cni-262175/client.crt: no such file or directory
E0701 13:08:36.564717  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/enable-default-cni-262175/client.crt: no such file or directory
E0701 13:08:37.205535  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/enable-default-cni-262175/client.crt: no such file or directory
E0701 13:08:38.486043  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/enable-default-cni-262175/client.crt: no such file or directory
E0701 13:08:41.046762  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/enable-default-cni-262175/client.crt: no such file or directory
E0701 13:08:46.167039  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/enable-default-cni-262175/client.crt: no such file or directory
E0701 13:08:54.928092  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
E0701 13:08:56.407314  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/enable-default-cni-262175/client.crt: no such file or directory
E0701 13:09:04.074135  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/flannel-262175/client.crt: no such file or directory
E0701 13:09:04.079459  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/flannel-262175/client.crt: no such file or directory
E0701 13:09:04.089800  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/flannel-262175/client.crt: no such file or directory
E0701 13:09:04.110242  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/flannel-262175/client.crt: no such file or directory
E0701 13:09:04.150580  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/flannel-262175/client.crt: no such file or directory
E0701 13:09:04.230885  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/flannel-262175/client.crt: no such file or directory
E0701 13:09:04.391545  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/flannel-262175/client.crt: no such file or directory
E0701 13:09:04.712212  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/flannel-262175/client.crt: no such file or directory
E0701 13:09:05.352948  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/flannel-262175/client.crt: no such file or directory
E0701 13:09:06.633803  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/flannel-262175/client.crt: no such file or directory
E0701 13:09:09.194857  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/flannel-262175/client.crt: no such file or directory
E0701 13:09:09.555218  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/false-262175/client.crt: no such file or directory
E0701 13:09:11.880886  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
E0701 13:09:14.315042  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/flannel-262175/client.crt: no such file or directory
E0701 13:09:16.887565  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/enable-default-cni-262175/client.crt: no such file or directory
E0701 13:09:20.605103  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/calico-262175/client.crt: no such file or directory
E0701 13:09:21.907330  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/bridge-262175/client.crt: no such file or directory
E0701 13:09:21.912627  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/bridge-262175/client.crt: no such file or directory
E0701 13:09:21.922914  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/bridge-262175/client.crt: no such file or directory
E0701 13:09:21.943273  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/bridge-262175/client.crt: no such file or directory
E0701 13:09:21.983523  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/bridge-262175/client.crt: no such file or directory
E0701 13:09:22.063902  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/bridge-262175/client.crt: no such file or directory
E0701 13:09:22.224716  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/bridge-262175/client.crt: no such file or directory
E0701 13:09:22.545705  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/bridge-262175/client.crt: no such file or directory
E0701 13:09:23.186904  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/bridge-262175/client.crt: no such file or directory
E0701 13:09:24.467176  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/bridge-262175/client.crt: no such file or directory
E0701 13:09:24.555396  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/flannel-262175/client.crt: no such file or directory
E0701 13:09:27.027379  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/bridge-262175/client.crt: no such file or directory
E0701 13:09:32.148113  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/bridge-262175/client.crt: no such file or directory
E0701 13:09:42.388733  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/bridge-262175/client.crt: no such file or directory
E0701 13:09:45.036226  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/flannel-262175/client.crt: no such file or directory
E0701 13:09:48.487903  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/custom-flannel-262175/client.crt: no such file or directory
E0701 13:09:57.848772  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/enable-default-cni-262175/client.crt: no such file or directory
E0701 13:10:02.869150  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/bridge-262175/client.crt: no such file or directory
E0701 13:10:18.498167  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/auto-262175/client.crt: no such file or directory
E0701 13:10:23.412258  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kubenet-262175/client.crt: no such file or directory
E0701 13:10:23.417603  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kubenet-262175/client.crt: no such file or directory
E0701 13:10:23.427890  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kubenet-262175/client.crt: no such file or directory
E0701 13:10:23.448199  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kubenet-262175/client.crt: no such file or directory
E0701 13:10:23.488527  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kubenet-262175/client.crt: no such file or directory
E0701 13:10:23.568930  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kubenet-262175/client.crt: no such file or directory
E0701 13:10:23.729402  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kubenet-262175/client.crt: no such file or directory
E0701 13:10:24.049988  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kubenet-262175/client.crt: no such file or directory
E0701 13:10:24.691209  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kubenet-262175/client.crt: no such file or directory
E0701 13:10:25.971730  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kubenet-262175/client.crt: no such file or directory
E0701 13:10:25.996993  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/flannel-262175/client.crt: no such file or directory
E0701 13:10:28.532968  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kubenet-262175/client.crt: no such file or directory
E0701 13:10:31.476120  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/false-262175/client.crt: no such file or directory
E0701 13:10:33.653903  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kubenet-262175/client.crt: no such file or directory
E0701 13:10:43.830158  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/bridge-262175/client.crt: no such file or directory
E0701 13:10:43.894358  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kubenet-262175/client.crt: no such file or directory
E0701 13:10:46.183569  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/auto-262175/client.crt: no such file or directory
E0701 13:10:52.382636  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kindnet-262175/client.crt: no such file or directory
E0701 13:11:04.374703  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kubenet-262175/client.crt: no such file or directory
E0701 13:11:19.768968  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/enable-default-cni-262175/client.crt: no such file or directory
E0701 13:11:20.068406  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kindnet-262175/client.crt: no such file or directory
E0701 13:11:36.759576  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/calico-262175/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-940378 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.2: (5m40.149696286s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-940378 -n default-k8s-diff-port-940378
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (340.41s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-hdbvh" [27b672d5-db8c-4280-a3fc-837451d579d7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005200409s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-hdbvh" [27b672d5-db8c-4280-a3fc-837451d579d7] Running
E0701 13:11:45.335273  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kubenet-262175/client.crt: no such file or directory
E0701 13:11:47.917762  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/flannel-262175/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004523298s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-392617 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-392617 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.5s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-392617 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-392617 -n no-preload-392617
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-392617 -n no-preload-392617: exit status 2 (242.854533ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-392617 -n no-preload-392617
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-392617 -n no-preload-392617: exit status 2 (238.956708ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-392617 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-392617 -n no-preload-392617
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-392617 -n no-preload-392617
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.50s)
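
The Pause subtest above drives a pause/status/unpause round trip; the same sequence is re-runnable by hand against this profile (status exiting 2 while the cluster is paused is expected, per the "may be ok" notes above):

  out/minikube-linux-amd64 pause -p no-preload-392617 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-392617 -n no-preload-392617   # prints "Paused", exit status 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-392617 -n no-preload-392617     # prints "Stopped", exit status 2
  out/minikube-linux-amd64 unpause -p no-preload-392617 --alsologtostderr -v=1
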

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (68.26s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-548690 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.30.2
E0701 13:11:54.376122  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/skaffold-459115/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-548690 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.30.2: (1m8.26185889s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (68.26s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-zvhh6" [db9d8060-2eb0-4776-a40e-51dd9fd35937] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004340801s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-zvhh6" [db9d8060-2eb0-4776-a40e-51dd9fd35937] Running
E0701 13:12:04.445697  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/calico-262175/client.crt: no such file or directory
E0701 13:12:04.643148  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/custom-flannel-262175/client.crt: no such file or directory
E0701 13:12:05.750482  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/bridge-262175/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004946608s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-234666 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-234666 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)
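
VerifyKubernetesImages lists the profile's images as JSON and reports anything outside the expected Kubernetes image set, hence the two "Found non-minikube image" lines above. The sketch below shows that idea only: the repoTags field name and the registry.k8s.io prefix check are assumptions for illustration, not minikube's verified JSON schema or its version-specific allow-list.

// imagecheck.go - flag unexpected images from `image list --format=json`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type image struct {
	RepoTags []string `json:"repoTags"` // assumed field name
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "embed-certs-234666", "image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var imgs []image
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs {
		for _, tag := range img.RepoTags {
			// Simplified heuristic: anything outside registry.k8s.io is reported,
			// mirroring the "Found non-minikube image" lines above.
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}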

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.57s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-234666 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-234666 -n embed-certs-234666
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-234666 -n embed-certs-234666: exit status 2 (246.723618ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-234666 -n embed-certs-234666
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-234666 -n embed-certs-234666: exit status 2 (240.443972ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-234666 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-234666 -n embed-certs-234666
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-234666 -n embed-certs-234666
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.57s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-548690 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-548690 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.013437662s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)
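
The --images and --registries overrides above pair Component=Value entries, so metrics-server is asked to pull registry.k8s.io/echoserver:1.4 from the deliberately unreachable fake.domain registry. A small sketch of that flag format; the parser is illustrative, not minikube's own code.

// overrides.go - pair up Component=Value entries from the two flags.
package main

import (
	"fmt"
	"strings"
)

// parse splits "Name=Value[,Name=Value...]" flag values into a map.
func parse(flag string) map[string]string {
	m := map[string]string{}
	for _, kv := range strings.Split(flag, ",") {
		if name, val, ok := strings.Cut(kv, "="); ok {
			m[name] = val
		}
	}
	return m
}

func main() {
	images := parse("MetricsServer=registry.k8s.io/echoserver:1.4")
	registries := parse("MetricsServer=fake.domain")
	for comp, img := range images {
		fmt.Printf("%s -> %s/%s\n", comp, registries[comp], img)
	}
}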

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.69s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-548690 --alsologtostderr -v=3
E0701 13:13:07.255961  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kubenet-262175/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-548690 --alsologtostderr -v=3: (7.685318913s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.69s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-548690 -n newest-cni-548690
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-548690 -n newest-cni-548690: exit status 7 (68.90913ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-548690 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)
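
EnableAddonAfterStop first confirms the host is down: status --format={{.Host}} prints Stopped and exits 7, which the test accepts as "may be ok", and the dashboard addon is then enabled against the stopped profile. A sketch of that gate follows; treating exit code 7 as "host stopped" is inferred from this run (alongside exit 2 for paused/stopped components), not from a documented exit-code table.

// hoststate.go - enable an addon only after confirming the host is Stopped.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "newest-cni-548690"
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		code = ee.ExitCode()
	}
	state := strings.TrimSpace(string(out))
	fmt.Printf("host=%s exit=%d\n", state, code)
	if state == "Stopped" && code == 7 {
		// Addon config is persisted to the profile even while the VM is down.
		exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard",
			"-p", profile,
			"--images=MetricsScraper=registry.k8s.io/echoserver:1.4").Run()
	}
}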

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (39.85s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-548690 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-548690 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.30.2: (39.610287588s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-548690 -n newest-cni-548690
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (39.85s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-6b477" [6bb6e6d7-c29a-40e4-8e1f-2fde51825fd9] Running
E0701 13:13:15.316739  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/false-262175/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004098057s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-6b477" [6bb6e6d7-c29a-40e4-8e1f-2fde51825fd9] Running
E0701 13:13:17.421728  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/skaffold-459115/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003804542s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-940378 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-940378 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-940378 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-940378 -n default-k8s-diff-port-940378
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-940378 -n default-k8s-diff-port-940378: exit status 2 (234.933075ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-940378 -n default-k8s-diff-port-940378
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-940378 -n default-k8s-diff-port-940378: exit status 2 (237.665431ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-940378 --alsologtostderr -v=1
E0701 13:13:22.862885  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/addons-877411/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-940378 -n default-k8s-diff-port-940378
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-940378 -n default-k8s-diff-port-940378
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-548690 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-548690 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-548690 -n newest-cni-548690
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-548690 -n newest-cni-548690: exit status 2 (228.000501ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-548690 -n newest-cni-548690
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-548690 -n newest-cni-548690: exit status 2 (228.022751ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-548690 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-548690 -n newest-cni-548690
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-548690 -n newest-cni-548690
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-btdwg" [c7118b2d-a79c-4b40-9cff-aac2e66739c4] Running
E0701 13:14:11.880887  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004126s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-btdwg" [c7118b2d-a79c-4b40-9cff-aac2e66739c4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003518532s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-169837 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-169837 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-169837 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-169837 -n old-k8s-version-169837
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-169837 -n old-k8s-version-169837: exit status 2 (237.141417ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-169837 -n old-k8s-version-169837
E0701 13:14:21.907147  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/bridge-262175/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-169837 -n old-k8s-version-169837: exit status 2 (235.912225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-169837 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-169837 -n old-k8s-version-169837
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-169837 -n old-k8s-version-169837
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.25s)

                                                
                                    

Test skip (31/341)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
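
This and the seven TunnelCmd skips that follow share the same cause: the tunnel tests need to modify the routing table, so they first check whether route can run without a password prompt. Below is one plausible Go probe for that precondition, not necessarily the suite's exact check.

// routeprobe.go - gate tunnel tests on passwordless access to `route`.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// -n makes sudo fail instead of prompting when a password would be needed.
	if err := exec.Command("sudo", "-n", "route").Run(); err != nil {
		fmt.Println("password required to execute 'route', skipping testTunnel:", err)
		return
	}
	fmt.Println("passwordless route available; tunnel tests can run")
}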

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.7s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-262175 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-262175

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-262175

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-262175

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-262175

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-262175

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-262175

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-262175

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-262175

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-262175

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-262175

>>> host: /etc/nsswitch.conf:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: /etc/hosts:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: /etc/resolv.conf:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-262175

>>> host: crictl pods:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: crictl containers:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> k8s: describe netcat deployment:
error: context "cilium-262175" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-262175" does not exist

>>> k8s: netcat logs:
error: context "cilium-262175" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-262175" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-262175" does not exist

>>> k8s: coredns logs:
error: context "cilium-262175" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-262175" does not exist

>>> k8s: api server logs:
error: context "cilium-262175" does not exist

>>> host: /etc/cni:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: ip a s:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: ip r s:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: iptables-save:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: iptables table nat:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-262175

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-262175

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-262175" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-262175" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-262175

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-262175

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-262175" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-262175" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-262175" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-262175" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-262175" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: kubelet daemon config:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> k8s: kubelet logs:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-262175

>>> host: docker daemon status:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: docker daemon config:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: docker system info:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: cri-docker daemon status:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: cri-docker daemon config:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: cri-dockerd version:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: containerd daemon status:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: crio daemon status:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: crio daemon config:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: /etc/crio:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

>>> host: crio config:
* Profile "cilium-262175" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262175"

----------------------- debugLogs end: cilium-262175 [took: 3.545317175s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-262175" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-262175
--- SKIP: TestNetworkPlugins/group/cilium (3.70s)

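The repeated "Profile ... not found" pairs above come from the network-plugin debugLogs collector: after the cilium variant is skipped, it still walks a fixed list of host-side probes (cri-docker, containerd, and crio daemons and configs) against a profile that was never created, so every probe prints minikube's profile-not-found hint instead of real output. A minimal Go sketch of such a probe loop, assuming illustrative names (probes, minikubeBin, profile) rather than the actual helpers in test/integration:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		minikubeBin := "out/minikube-linux-amd64" // binary path as invoked in this report
		profile := "cilium-262175"
		// Illustrative subset of the ">>> host: ..." probes seen above.
		probes := []struct{ title, cmd string }{
			{"cri-docker daemon config", "sudo systemctl cat cri-docker"},
			{"/etc/containerd/config.toml", "sudo cat /etc/containerd/config.toml"},
			{"crio config", "sudo crio config"},
		}
		for _, p := range probes {
			fmt.Printf(">>> host: %s:\n", p.title)
			// "minikube ssh -p <profile> <cmd>" runs the command inside the VM;
			// with the profile absent, it prints the profile-not-found hint instead.
			out, _ := exec.Command(minikubeBin, "ssh", "-p", profile, p.cmd).CombinedOutput()
			fmt.Print(string(out))
		}
	}
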
TestStartStop/group/disable-driver-mounts (0.78s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-239286" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-239286
E0701 13:05:52.383149  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kindnet-262175/client.crt: no such file or directory
E0701 13:05:52.388515  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kindnet-262175/client.crt: no such file or directory
E0701 13:05:52.398820  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kindnet-262175/client.crt: no such file or directory
E0701 13:05:52.419174  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kindnet-262175/client.crt: no such file or directory
E0701 13:05:52.459527  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kindnet-262175/client.crt: no such file or directory
E0701 13:05:52.539877  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kindnet-262175/client.crt: no such file or directory
E0701 13:05:52.700707  637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/kindnet-262175/client.crt: no such file or directory
--- SKIP: TestStartStop/group/disable-driver-mounts (0.78s)

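The skip at start_stop_delete_test.go:103 is a driver gate: this run used kvm2, and the test only executes under virtualbox, so it bails out immediately and the harness merely deletes the placeholder profile. A hedged Go sketch of such a gate, with DriverName() as an illustrative stand-in for however the suite exposes the driver under test:

	package integration

	import "testing"

	// DriverName is a hypothetical helper standing in for the suite's real
	// driver lookup; for this report's run it would return "kvm2".
	func DriverName() string { return "kvm2" }

	func TestDisableDriverMounts(t *testing.T) {
		// Gate mirroring the message logged at start_stop_delete_test.go:103.
		if DriverName() != "virtualbox" {
			t.Skip("skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox")
		}
		// The real test would exercise --disable-driver-mounts here.
	}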