Test Report: KVM_Linux 20427

a480bdc5e776ed1bdb04039eceacb0c7aced7f2e:2025-02-17:38392

Failed tests (4/344)

Order  Failed test  Duration (s)
176 TestMultiControlPlane/serial/RestartCluster 112.44
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.75
178 TestMultiControlPlane/serial/AddSecondaryNode 1.6
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.75
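
The three sub-two-second failures after RestartCluster are the usual cascade for this suite: the serial subtests share a single cluster profile, so once the restart fails there is no healthy cluster left for the later steps to validate. A simplified Go sketch of that serial, table-driven layout (placeholder validators, not the actual ha_test.go code):

package ha

import "testing"

// Placeholder validators; the real steps shell out to the minikube binary.
func validateRestart(t *testing.T, profile string)  {}
func validateDegraded(t *testing.T, profile string) {}
func validateNodeAdd(t *testing.T, profile string)  {}
func validateHA(t *testing.T, profile string)       {}

func TestMultiControlPlaneSketch(t *testing.T) {
	profile := "ha-783738"
	steps := []struct {
		name string
		fn   func(*testing.T, string)
	}{
		{"RestartCluster", validateRestart},
		{"DegradedAfterClusterRestart", validateDegraded},
		{"AddSecondaryNode", validateNodeAdd},
		{"HAppyAfterSecondaryNodeAdd", validateHA},
	}
	for _, s := range steps {
		// Every step reuses the same profile, so a failed restart turns
		// into the fast follow-on failures listed in the table above.
		t.Run(s.name, func(t *testing.T) { s.fn(t, profile) })
	}
}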
TestMultiControlPlane/serial/RestartCluster (112.44s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-783738 --wait=true -v=7 --alsologtostderr --driver=kvm2 
E0217 11:58:34.519547   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-783738 --wait=true -v=7 --alsologtostderr --driver=kvm2 : exit status 90 (1m50.94698712s)
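
For context, the (dbg) Run and Non-zero exit lines come from the harness shelling out to the built binary. A minimal sketch of that pattern, using a hypothetical helper rather than minikube's real test utilities:

package ha

import (
	"os/exec"
	"testing"
)

// runMinikube is a hypothetical helper mirroring what ha_test.go:562 does:
// run the built binary with the given arguments and fail the test when the
// process exits non-zero (exit status 90 in the run above).
func runMinikube(t *testing.T, args ...string) string {
	t.Helper()
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		// For a non-zero exit, err is an *exec.ExitError carrying the code.
		t.Fatalf("%v: %v\n%s", args, err, out)
	}
	return string(out)
}

The failing call above would then look like runMinikube(t, "start", "-p", "ha-783738", "--wait=true", "-v=7", "--alsologtostderr", "--driver=kvm2").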

-- stdout --
	* [ha-783738] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20427-77349/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-77349/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-783738" primary control-plane node in "ha-783738" cluster
	* Restarting existing kvm2 VM for "ha-783738" ...
	* Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	* Enabled addons: 
	
	* Starting "ha-783738-m02" control-plane node in "ha-783738" cluster
	* Restarting existing kvm2 VM for "ha-783738-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.249
	
	

-- /stdout --
** stderr ** 
	I0217 11:56:50.215291  100380 out.go:345] Setting OutFile to fd 1 ...
	I0217 11:56:50.215609  100380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:56:50.215619  100380 out.go:358] Setting ErrFile to fd 2...
	I0217 11:56:50.215624  100380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:56:50.215819  100380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-77349/.minikube/bin
	I0217 11:56:50.216353  100380 out.go:352] Setting JSON to false
	I0217 11:56:50.217237  100380 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5958,"bootTime":1739787452,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0217 11:56:50.217362  100380 start.go:139] virtualization: kvm guest
	I0217 11:56:50.219910  100380 out.go:177] * [ha-783738] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0217 11:56:50.221323  100380 out.go:177]   - MINIKUBE_LOCATION=20427
	I0217 11:56:50.221334  100380 notify.go:220] Checking for updates...
	I0217 11:56:50.223835  100380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0217 11:56:50.224954  100380 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:56:50.226180  100380 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-77349/.minikube
	I0217 11:56:50.227361  100380 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0217 11:56:50.228473  100380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0217 11:56:50.229885  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:56:50.230261  100380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:56:50.230308  100380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:56:50.245239  100380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46091
	I0217 11:56:50.245761  100380 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:56:50.246359  100380 main.go:141] libmachine: Using API Version  1
	I0217 11:56:50.246382  100380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:56:50.246775  100380 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:56:50.246962  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:56:50.247230  100380 driver.go:394] Setting default libvirt URI to qemu:///system
	I0217 11:56:50.247538  100380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:56:50.247594  100380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:56:50.262713  100380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
	I0217 11:56:50.263097  100380 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:56:50.263692  100380 main.go:141] libmachine: Using API Version  1
	I0217 11:56:50.263752  100380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:56:50.264059  100380 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:56:50.264289  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:56:50.297981  100380 out.go:177] * Using the kvm2 driver based on existing profile
	I0217 11:56:50.299143  100380 start.go:297] selected driver: kvm2
	I0217 11:56:50.299155  100380 start.go:901] validating driver "kvm2" against &{Name:ha-783738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.168 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 11:56:50.299304  100380 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0217 11:56:50.299646  100380 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0217 11:56:50.299706  100380 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20427-77349/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0217 11:56:50.314229  100380 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0217 11:56:50.314917  100380 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0217 11:56:50.314949  100380 cni.go:84] Creating CNI manager for ""
	I0217 11:56:50.315000  100380 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0217 11:56:50.315060  100380 start.go:340] cluster config:
	{Name:ha-783738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.168 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 11:56:50.315190  100380 iso.go:125] acquiring lock: {Name:mk4380b7bda8fcd8bced9705ff1695c3fb7dac0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0217 11:56:50.317519  100380 out.go:177] * Starting "ha-783738" primary control-plane node in "ha-783738" cluster
	I0217 11:56:50.318547  100380 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0217 11:56:50.318578  100380 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20427-77349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0217 11:56:50.318588  100380 cache.go:56] Caching tarball of preloaded images
	I0217 11:56:50.318681  100380 preload.go:172] Found /home/jenkins/minikube-integration/20427-77349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0217 11:56:50.318695  100380 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0217 11:56:50.318829  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:56:50.319009  100380 start.go:360] acquireMachinesLock for ha-783738: {Name:mk05ba8323ae77ab7dcc14c378d65810d956fdc0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0217 11:56:50.319055  100380 start.go:364] duration metric: took 23.519µs to acquireMachinesLock for "ha-783738"
	I0217 11:56:50.319080  100380 start.go:96] Skipping create...Using existing machine configuration
	I0217 11:56:50.319088  100380 fix.go:54] fixHost starting: 
	I0217 11:56:50.319353  100380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:56:50.319391  100380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:56:50.333761  100380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34803
	I0217 11:56:50.334152  100380 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:56:50.334693  100380 main.go:141] libmachine: Using API Version  1
	I0217 11:56:50.334714  100380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:56:50.335000  100380 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:56:50.335210  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:56:50.335347  100380 main.go:141] libmachine: (ha-783738) Calling .GetState
	I0217 11:56:50.336730  100380 fix.go:112] recreateIfNeeded on ha-783738: state=Stopped err=<nil>
	I0217 11:56:50.336752  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	W0217 11:56:50.336864  100380 fix.go:138] unexpected machine state, will restart: <nil>
	I0217 11:56:50.338814  100380 out.go:177] * Restarting existing kvm2 VM for "ha-783738" ...
	I0217 11:56:50.340020  100380 main.go:141] libmachine: (ha-783738) Calling .Start
	I0217 11:56:50.340200  100380 main.go:141] libmachine: (ha-783738) starting domain...
	I0217 11:56:50.340221  100380 main.go:141] libmachine: (ha-783738) ensuring networks are active...
	I0217 11:56:50.340845  100380 main.go:141] libmachine: (ha-783738) Ensuring network default is active
	I0217 11:56:50.341268  100380 main.go:141] libmachine: (ha-783738) Ensuring network mk-ha-783738 is active
	I0217 11:56:50.341612  100380 main.go:141] libmachine: (ha-783738) getting domain XML...
	I0217 11:56:50.342286  100380 main.go:141] libmachine: (ha-783738) creating domain...
	I0217 11:56:51.533335  100380 main.go:141] libmachine: (ha-783738) waiting for IP...
	I0217 11:56:51.534198  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:51.534571  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:51.534631  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:51.534554  100416 retry.go:31] will retry after 214.112758ms: waiting for domain to come up
	I0217 11:56:51.750038  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:51.750535  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:51.750587  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:51.750528  100416 retry.go:31] will retry after 287.575076ms: waiting for domain to come up
	I0217 11:56:52.040019  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:52.040473  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:52.040515  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:52.040452  100416 retry.go:31] will retry after 303.389275ms: waiting for domain to come up
	I0217 11:56:52.345057  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:52.345400  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:52.345452  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:52.345383  100416 retry.go:31] will retry after 580.610288ms: waiting for domain to come up
	I0217 11:56:52.927102  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:52.927623  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:52.927663  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:52.927596  100416 retry.go:31] will retry after 470.88869ms: waiting for domain to come up
	I0217 11:56:53.400293  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:53.400698  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:53.400725  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:53.400636  100416 retry.go:31] will retry after 645.102407ms: waiting for domain to come up
	I0217 11:56:54.046798  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:54.047309  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:54.047365  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:54.047265  100416 retry.go:31] will retry after 993.016218ms: waiting for domain to come up
	I0217 11:56:55.041450  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:55.041808  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:55.041828  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:55.041790  100416 retry.go:31] will retry after 1.096274529s: waiting for domain to come up
	I0217 11:56:56.139475  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:56.139892  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:56.139957  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:56.139882  100416 retry.go:31] will retry after 1.840421804s: waiting for domain to come up
	I0217 11:56:57.981618  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:57.982040  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:57.982068  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:57.981979  100416 retry.go:31] will retry after 1.8969141s: waiting for domain to come up
	I0217 11:56:59.881026  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:59.881535  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:59.881570  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:59.881471  100416 retry.go:31] will retry after 1.890240518s: waiting for domain to come up
	I0217 11:57:01.773274  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:01.773728  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:57:01.773779  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:57:01.773696  100416 retry.go:31] will retry after 3.046762911s: waiting for domain to come up
	I0217 11:57:04.823999  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:04.824458  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:57:04.824497  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:57:04.824453  100416 retry.go:31] will retry after 3.819063496s: waiting for domain to come up
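
The "will retry after ..." lines above come from minikube's retry helper: the wait-for-IP poll sleeps for roughly exponentially growing, jittered delays, from ~214ms up to ~3.8s here. A rough sketch of that shape (illustrative only, not retry.go's actual API):

package ha

import (
	"errors"
	"math/rand"
	"time"
)

// retryWithBackoff polls fn with roughly doubling, jittered delays, the
// shape of the "will retry after ..." lines in this log. Parameter names
// and values are assumptions for illustration.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		time.Sleep(delay/2 + jitter) // randomized wait around the base delay
		delay *= 2
	}
	return errors.New("gave up waiting for domain IP")
}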
	I0217 11:57:08.647831  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.648309  100380 main.go:141] libmachine: (ha-783738) found domain IP: 192.168.39.249
	I0217 11:57:08.648334  100380 main.go:141] libmachine: (ha-783738) reserving static IP address...
	I0217 11:57:08.648347  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has current primary IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.648799  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "ha-783738", mac: "52:54:00:fb:6f:65", ip: "192.168.39.249"} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.648824  100380 main.go:141] libmachine: (ha-783738) DBG | skip adding static IP to network mk-ha-783738 - found existing host DHCP lease matching {name: "ha-783738", mac: "52:54:00:fb:6f:65", ip: "192.168.39.249"}
	I0217 11:57:08.648835  100380 main.go:141] libmachine: (ha-783738) reserved static IP address 192.168.39.249 for domain ha-783738
	I0217 11:57:08.648846  100380 main.go:141] libmachine: (ha-783738) waiting for SSH...
	I0217 11:57:08.648862  100380 main.go:141] libmachine: (ha-783738) DBG | Getting to WaitForSSH function...
	I0217 11:57:08.650828  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.651193  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.651224  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.651387  100380 main.go:141] libmachine: (ha-783738) DBG | Using SSH client type: external
	I0217 11:57:08.651414  100380 main.go:141] libmachine: (ha-783738) DBG | Using SSH private key: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa (-rw-------)
	I0217 11:57:08.651435  100380 main.go:141] libmachine: (ha-783738) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0217 11:57:08.651464  100380 main.go:141] libmachine: (ha-783738) DBG | About to run SSH command:
	I0217 11:57:08.651480  100380 main.go:141] libmachine: (ha-783738) DBG | exit 0
	I0217 11:57:08.776922  100380 main.go:141] libmachine: (ha-783738) DBG | SSH cmd err, output: <nil>: 
	I0217 11:57:08.777326  100380 main.go:141] libmachine: (ha-783738) Calling .GetConfigRaw
	I0217 11:57:08.777959  100380 main.go:141] libmachine: (ha-783738) Calling .GetIP
	I0217 11:57:08.780301  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.780692  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.780735  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.780948  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:57:08.781137  100380 machine.go:93] provisionDockerMachine start ...
	I0217 11:57:08.781154  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:08.781442  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:08.783478  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.783868  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.783897  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.784048  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:08.784237  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:08.784393  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:08.784570  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:08.784738  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:08.784917  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:08.784928  100380 main.go:141] libmachine: About to run SSH command:
	hostname
	I0217 11:57:08.889484  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0217 11:57:08.889525  100380 main.go:141] libmachine: (ha-783738) Calling .GetMachineName
	I0217 11:57:08.889783  100380 buildroot.go:166] provisioning hostname "ha-783738"
	I0217 11:57:08.889818  100380 main.go:141] libmachine: (ha-783738) Calling .GetMachineName
	I0217 11:57:08.890003  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:08.892666  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.893027  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.893060  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.893202  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:08.893391  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:08.893536  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:08.893661  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:08.893787  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:08.893949  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:08.893960  100380 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-783738 && echo "ha-783738" | sudo tee /etc/hostname
	I0217 11:57:09.014626  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-783738
	
	I0217 11:57:09.014653  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.017274  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.017710  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.017744  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.017939  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.018131  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.018348  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.018473  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.018701  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:09.018967  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:09.018994  100380 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-783738' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-783738/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-783738' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0217 11:57:09.133208  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0217 11:57:09.133247  100380 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20427-77349/.minikube CaCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20427-77349/.minikube}
	I0217 11:57:09.133278  100380 buildroot.go:174] setting up certificates
	I0217 11:57:09.133295  100380 provision.go:84] configureAuth start
	I0217 11:57:09.133331  100380 main.go:141] libmachine: (ha-783738) Calling .GetMachineName
	I0217 11:57:09.133680  100380 main.go:141] libmachine: (ha-783738) Calling .GetIP
	I0217 11:57:09.136393  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.136746  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.136771  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.136918  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.139192  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.139545  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.139583  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.139699  100380 provision.go:143] copyHostCerts
	I0217 11:57:09.139734  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem
	I0217 11:57:09.139786  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem, removing ...
	I0217 11:57:09.139804  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem
	I0217 11:57:09.139883  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem (1082 bytes)
	I0217 11:57:09.139996  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem
	I0217 11:57:09.140023  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem, removing ...
	I0217 11:57:09.140030  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem
	I0217 11:57:09.140079  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem (1123 bytes)
	I0217 11:57:09.140159  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem
	I0217 11:57:09.140184  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem, removing ...
	I0217 11:57:09.140191  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem
	I0217 11:57:09.140228  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem (1675 bytes)
	I0217 11:57:09.140314  100380 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem org=jenkins.ha-783738 san=[127.0.0.1 192.168.39.249 ha-783738 localhost minikube]
	I0217 11:57:09.269804  100380 provision.go:177] copyRemoteCerts
	I0217 11:57:09.269900  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0217 11:57:09.269935  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.272592  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.272916  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.272945  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.273095  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.273282  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.273464  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.273600  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:09.355256  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0217 11:57:09.355331  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0217 11:57:09.378132  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0217 11:57:09.378243  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0217 11:57:09.399749  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0217 11:57:09.399830  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0217 11:57:09.421183  100380 provision.go:87] duration metric: took 287.855291ms to configureAuth
	I0217 11:57:09.421207  100380 buildroot.go:189] setting minikube options for container-runtime
	I0217 11:57:09.421432  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:57:09.421460  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:09.421765  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.424701  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.425141  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.425173  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.425370  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.425557  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.425734  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.425883  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.426059  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:09.426283  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:09.426297  100380 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0217 11:57:09.534976  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0217 11:57:09.535006  100380 buildroot.go:70] root file system type: tmpfs
	I0217 11:57:09.535125  100380 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0217 11:57:09.535163  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.537739  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.538108  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.538126  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.538307  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.538481  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.538662  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.538802  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.538949  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:09.539142  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:09.539243  100380 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0217 11:57:09.658326  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0217 11:57:09.658371  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.661372  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.661838  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.661875  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.662085  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.662300  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.662435  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.662559  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.662707  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:09.662897  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:09.662913  100380 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0217 11:57:11.588699  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
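
The diff || { mv; systemctl ... } one-liner above is an idempotent unit update: only when the rendered docker.service differs is it moved into place and docker restarted (here diff fails because no unit file existed yet, so the move happens unconditionally). A sketch of the same update-if-changed idea in Go, with hypothetical names:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// replaceIfChanged writes newContent to path only when it differs from the
// current contents, mirroring the diff/mv idiom in the SSH command above.
// A read error (e.g. the file does not exist, as in this log) counts as
// changed. The returned bool signals whether a restart would be needed.
func replaceIfChanged(path string, newContent []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return false, nil // unchanged: skip daemon-reload/restart
	}
	if err := os.WriteFile(path, newContent, 0o644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	changed, err := replaceIfChanged("/tmp/docker.service.demo", []byte("[Unit]\n"))
	fmt.Println(changed, err)
}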
	
	I0217 11:57:11.588766  100380 machine.go:96] duration metric: took 2.807616414s to provisionDockerMachine
	I0217 11:57:11.588782  100380 start.go:293] postStartSetup for "ha-783738" (driver="kvm2")
	I0217 11:57:11.588792  100380 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0217 11:57:11.588810  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.589177  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0217 11:57:11.589221  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.592192  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.592596  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.592627  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.592785  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.592979  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.593170  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.593334  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:11.675232  100380 ssh_runner.go:195] Run: cat /etc/os-release
	I0217 11:57:11.679319  100380 info.go:137] Remote host: Buildroot 2023.02.9
	I0217 11:57:11.679347  100380 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-77349/.minikube/addons for local assets ...
	I0217 11:57:11.679434  100380 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-77349/.minikube/files for local assets ...
	I0217 11:57:11.679553  100380 filesync.go:149] local asset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> 845022.pem in /etc/ssl/certs
	I0217 11:57:11.679569  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> /etc/ssl/certs/845022.pem
	I0217 11:57:11.679700  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0217 11:57:11.688596  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem --> /etc/ssl/certs/845022.pem (1708 bytes)
	I0217 11:57:11.712948  100380 start.go:296] duration metric: took 124.147315ms for postStartSetup
	I0217 11:57:11.713041  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.713388  100380 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0217 11:57:11.713431  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.716109  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.716482  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.716509  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.716697  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.716902  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.717111  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.717237  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:11.799568  100380 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0217 11:57:11.799647  100380 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0217 11:57:11.840659  100380 fix.go:56] duration metric: took 21.521561421s for fixHost
	I0217 11:57:11.840710  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.843711  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.844159  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.844211  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.844334  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.844538  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.844685  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.844877  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.845064  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:11.845292  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:11.845324  100380 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0217 11:57:11.961693  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739793431.919777749
	
	I0217 11:57:11.961720  100380 fix.go:216] guest clock: 1739793431.919777749
	I0217 11:57:11.961728  100380 fix.go:229] Guest: 2025-02-17 11:57:11.919777749 +0000 UTC Remote: 2025-02-17 11:57:11.840688548 +0000 UTC m=+21.663425668 (delta=79.089201ms)
	I0217 11:57:11.961764  100380 fix.go:200] guest clock delta is within tolerance: 79.089201ms
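
The guest-clock check above compares the VM's `date +%s.%N` output against the host's wall clock and accepts the restart when the delta stays inside a tolerance. A minimal Go sketch of that computation, fed with the values from the log; the one-second tolerance is an illustrative assumption, not necessarily minikube's exact threshold:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts `date +%s.%N` output into a time.Time.
	// %N always prints nine digits, so the fraction parses directly as nanoseconds.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1739793431.919777749") // SSH output above
		if err != nil {
			panic(err)
		}
		host := time.Date(2025, 2, 17, 11, 57, 11, 840688548, time.UTC) // "Remote" timestamp above
		delta := guest.Sub(host)
		tolerance := time.Second // assumed tolerance, for illustration only
		fmt.Printf("delta=%v within tolerance=%v\n", delta, delta.Abs() < tolerance)
	}

Run against the logged values this prints delta=79.089201ms, matching the delta recorded above.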
	I0217 11:57:11.961771  100380 start.go:83] releasing machines lock for "ha-783738", held for 21.642703542s
	I0217 11:57:11.961797  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.962076  100380 main.go:141] libmachine: (ha-783738) Calling .GetIP
	I0217 11:57:11.964739  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.965072  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.965098  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.965245  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.965780  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.965938  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.966020  100380 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0217 11:57:11.966085  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.966153  100380 ssh_runner.go:195] Run: cat /version.json
	I0217 11:57:11.966182  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.968710  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.968814  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.969180  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.969211  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.969228  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.969243  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.969400  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.969505  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.969573  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.969654  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.969705  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.969780  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.969855  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:11.969896  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:12.070993  100380 ssh_runner.go:195] Run: systemctl --version
	I0217 11:57:12.076962  100380 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0217 11:57:12.082069  100380 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0217 11:57:12.082164  100380 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0217 11:57:12.097308  100380 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0217 11:57:12.097353  100380 start.go:495] detecting cgroup driver to use...
	I0217 11:57:12.097502  100380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0217 11:57:12.116857  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0217 11:57:12.128177  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0217 11:57:12.139383  100380 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0217 11:57:12.139433  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0217 11:57:12.150535  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0217 11:57:12.161824  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0217 11:57:12.173075  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0217 11:57:12.184735  100380 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0217 11:57:12.196065  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0217 11:57:12.206061  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0217 11:57:12.215826  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
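
Each `sed -i -r` run above rewrites one key of /etc/containerd/config.toml in place; the SystemdCgroup edit is the one that actually switches containerd's runc runtime to the cgroupfs driver. A rough Go equivalent of that single rewrite (the sample TOML fragment is illustrative, not the VM's full config):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true`
		// Same pattern as the sed command above: match the key at any indent,
		// keep the indent via the capture group, force the value to false.
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
	}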
	I0217 11:57:12.225719  100380 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0217 11:57:12.234589  100380 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0217 11:57:12.234644  100380 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0217 11:57:12.244581  100380 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
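
The sysctl probe above exits with status 255 because /proc/sys/net/bridge does not exist until the br_netfilter module is loaded, so the flow falls back to modprobe and then enables IPv4 forwarding. A sketch of that probe-then-load pattern, not minikube's actual implementation:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureBrNetfilter probes the sysctl key first and only loads the
	// br_netfilter module when the key is missing, then re-checks.
	func ensureBrNetfilter() error {
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
			return nil // key exists, module already loaded
		}
		if err := exec.Command("sudo", "modprobe", "br_netfilter"); err != nil && false {
			_ = err // placeholder, see below
		}
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
		return exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run()
	}

	func main() {
		if err := ensureBrNetfilter(); err != nil {
			fmt.Println("netfilter setup failed:", err)
		}
	}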
	I0217 11:57:12.253602  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:12.359116  100380 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0217 11:57:12.382906  100380 start.go:495] detecting cgroup driver to use...
	I0217 11:57:12.383010  100380 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0217 11:57:12.408300  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0217 11:57:12.424027  100380 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0217 11:57:12.444833  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0217 11:57:12.457628  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0217 11:57:12.470140  100380 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0217 11:57:12.497764  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0217 11:57:12.511071  100380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0217 11:57:12.529141  100380 ssh_runner.go:195] Run: which cri-dockerd
	I0217 11:57:12.532846  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0217 11:57:12.541895  100380 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0217 11:57:12.557198  100380 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0217 11:57:12.670128  100380 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0217 11:57:12.796263  100380 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0217 11:57:12.796399  100380 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0217 11:57:12.812229  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:12.923350  100380 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0217 11:57:15.351609  100380 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.428206669s)
	I0217 11:57:15.351699  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0217 11:57:15.364852  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0217 11:57:15.377423  100380 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0217 11:57:15.493635  100380 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0217 11:57:15.621524  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:15.730858  100380 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0217 11:57:15.748138  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0217 11:57:15.761818  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:15.881775  100380 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0217 11:57:15.960772  100380 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0217 11:57:15.960858  100380 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0217 11:57:15.966411  100380 start.go:563] Will wait 60s for crictl version
	I0217 11:57:15.966517  100380 ssh_runner.go:195] Run: which crictl
	I0217 11:57:15.974036  100380 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0217 11:57:16.011837  100380 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0217 11:57:16.011912  100380 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0217 11:57:16.036945  100380 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0217 11:57:16.060974  100380 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0217 11:57:16.061031  100380 main.go:141] libmachine: (ha-783738) Calling .GetIP
	I0217 11:57:16.063810  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:16.064255  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:16.064298  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:16.064499  100380 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0217 11:57:16.068464  100380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
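
The bash one-liner above is an upsert: drop any existing line for host.minikube.internal from /etc/hosts, append the fresh mapping, and copy the result back via sudo. The same pattern in Go, pointed at a scratch copy since /etc/hosts itself needs root (illustrative, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHostsEntry removes any line ending in "\t<host>" (the suffix the
	// grep above matches) and appends the new ip/host mapping.
	func upsertHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := upsertHostsEntry("/tmp/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}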
	I0217 11:57:16.080668  100380 kubeadm.go:883] updating cluster {Name:ha-783738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.168 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0217 11:57:16.080804  100380 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0217 11:57:16.080849  100380 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0217 11:57:16.098890  100380 docker.go:689] Got preloaded images: -- stdout --
	kindest/kindnetd:v20250214-acbabc1a
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	ghcr.io/kube-vip/kube-vip:v0.8.9
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0217 11:57:16.098911  100380 docker.go:619] Images already preloaded, skipping extraction
	I0217 11:57:16.098974  100380 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0217 11:57:16.116506  100380 docker.go:689] Got preloaded images: -- stdout --
	kindest/kindnetd:v20250214-acbabc1a
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	ghcr.io/kube-vip/kube-vip:v0.8.9
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0217 11:57:16.116540  100380 cache_images.go:84] Images are preloaded, skipping loading
	I0217 11:57:16.116556  100380 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.32.1 docker true true} ...
	I0217 11:57:16.116703  100380 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-783738 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0217 11:57:16.116764  100380 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0217 11:57:16.164431  100380 cni.go:84] Creating CNI manager for ""
	I0217 11:57:16.164455  100380 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0217 11:57:16.164469  100380 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0217 11:57:16.164499  100380 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-783738 NodeName:ha-783738 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0217 11:57:16.164682  100380 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-783738"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.249"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0217 11:57:16.164704  100380 kube-vip.go:115] generating kube-vip config ...
	I0217 11:57:16.164766  100380 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0217 11:57:16.178981  100380 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0217 11:57:16.179102  100380 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
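
The vip_leaseduration/vip_renewdeadline/vip_retryperiod values in the generated manifest are standard client-go leader-election timings, which require lease duration > renew deadline > retry period (client-go actually compares the renew deadline against a jittered retry period); the generated 5/3/1 split satisfies that ordering. A trivial check plus a rough failover estimate, both illustrative:

	package main

	import "fmt"

	func main() {
		// Timings from the kube-vip manifest above, in seconds.
		lease, renew, retry := 5, 3, 1
		fmt.Println("ordering ok:", lease > renew && renew > retry)
		// Rough upper bound on VIP failover after a control-plane node dies:
		// the old lease must expire before another kube-vip replica can claim it.
		fmt.Printf("failover upper bound ~%ds\n", lease+retry)
	}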
	I0217 11:57:16.179161  100380 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0217 11:57:16.189237  100380 binaries.go:44] Found k8s binaries, skipping transfer
	I0217 11:57:16.189321  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0217 11:57:16.198727  100380 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0217 11:57:16.214787  100380 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0217 11:57:16.231014  100380 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0217 11:57:16.246729  100380 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0217 11:57:16.261779  100380 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0217 11:57:16.265453  100380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0217 11:57:16.276521  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:16.384249  100380 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0217 11:57:16.401291  100380 certs.go:68] Setting up /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738 for IP: 192.168.39.249
	I0217 11:57:16.401328  100380 certs.go:194] generating shared ca certs ...
	I0217 11:57:16.401350  100380 certs.go:226] acquiring lock for ca certs: {Name:mk7093571229e43ae88bf2507ccc9fd2cd05388e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:16.401508  100380 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.key
	I0217 11:57:16.401544  100380 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.key
	I0217 11:57:16.401555  100380 certs.go:256] generating profile certs ...
	I0217 11:57:16.401635  100380 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/client.key
	I0217 11:57:16.401660  100380 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key.1b1cbf3b
	I0217 11:57:16.401671  100380 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt.1b1cbf3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.31 192.168.39.254]
	I0217 11:57:16.475033  100380 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt.1b1cbf3b ...
	I0217 11:57:16.475062  100380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt.1b1cbf3b: {Name:mkcae1f9f128e66451afcd5b133e6826e9862cbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:16.475228  100380 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key.1b1cbf3b ...
	I0217 11:57:16.475243  100380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key.1b1cbf3b: {Name:mk484c481609a3c2ed473dfecb8f5468118b1367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:16.475330  100380 certs.go:381] copying /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt.1b1cbf3b -> /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt
	I0217 11:57:16.475492  100380 certs.go:385] copying /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key.1b1cbf3b -> /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key
	I0217 11:57:16.475629  100380 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.key
	I0217 11:57:16.475644  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0217 11:57:16.475656  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0217 11:57:16.475671  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0217 11:57:16.475699  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0217 11:57:16.475714  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0217 11:57:16.475726  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0217 11:57:16.475737  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0217 11:57:16.475748  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0217 11:57:16.475800  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/84502.pem (1338 bytes)
	W0217 11:57:16.475831  100380 certs.go:480] ignoring /home/jenkins/minikube-integration/20427-77349/.minikube/certs/84502_empty.pem, impossibly tiny 0 bytes
	I0217 11:57:16.475839  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem (1679 bytes)
	I0217 11:57:16.475861  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem (1082 bytes)
	I0217 11:57:16.475900  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem (1123 bytes)
	I0217 11:57:16.475927  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem (1675 bytes)
	I0217 11:57:16.476002  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem (1708 bytes)
	I0217 11:57:16.476031  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/84502.pem -> /usr/share/ca-certificates/84502.pem
	I0217 11:57:16.476046  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> /usr/share/ca-certificates/845022.pem
	I0217 11:57:16.476058  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0217 11:57:16.476652  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0217 11:57:16.507138  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0217 11:57:16.534527  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0217 11:57:16.562922  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0217 11:57:16.587311  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0217 11:57:16.624087  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0217 11:57:16.662037  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0217 11:57:16.713619  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0217 11:57:16.756345  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/certs/84502.pem --> /usr/share/ca-certificates/84502.pem (1338 bytes)
	I0217 11:57:16.803520  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem --> /usr/share/ca-certificates/845022.pem (1708 bytes)
	I0217 11:57:16.846879  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0217 11:57:16.920267  100380 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0217 11:57:16.950648  100380 ssh_runner.go:195] Run: openssl version
	I0217 11:57:16.958784  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84502.pem && ln -fs /usr/share/ca-certificates/84502.pem /etc/ssl/certs/84502.pem"
	I0217 11:57:16.987238  100380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84502.pem
	I0217 11:57:16.994220  100380 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 17 11:42 /usr/share/ca-certificates/84502.pem
	I0217 11:57:16.994283  100380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84502.pem
	I0217 11:57:17.016466  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84502.pem /etc/ssl/certs/51391683.0"
	I0217 11:57:17.039972  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845022.pem && ln -fs /usr/share/ca-certificates/845022.pem /etc/ssl/certs/845022.pem"
	I0217 11:57:17.061818  100380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845022.pem
	I0217 11:57:17.068988  100380 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 17 11:42 /usr/share/ca-certificates/845022.pem
	I0217 11:57:17.069057  100380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845022.pem
	I0217 11:57:17.075953  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/845022.pem /etc/ssl/certs/3ec20f2e.0"
	I0217 11:57:17.094161  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0217 11:57:17.111313  100380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0217 11:57:17.116268  100380 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 17 11:35 /usr/share/ca-certificates/minikubeCA.pem
	I0217 11:57:17.116335  100380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0217 11:57:17.122743  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
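
The `.0` names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash links: `openssl x509 -hash -noout` prints an eight-hex-digit hash of the certificate subject, and OpenSSL resolves CA certificates in /etc/ssl/certs by looking up `<hash>.0`. A Go sketch of the same link-by-hash step (illustrative, not minikube's code; needs openssl on PATH and write access to the target directory):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCertByHash asks openssl for the certificate's subject hash and
	// symlinks <hash>.0 to it, mirroring the ln -fs commands in the log.
	func linkCertByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "51391683" for 84502.pem above
		return os.Symlink(certPath, filepath.Join(certsDir, hash+".0"))
	}

	func main() {
		if err := linkCertByHash("/usr/share/ca-certificates/84502.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println(err)
		}
	}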
	I0217 11:57:17.141827  100380 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0217 11:57:17.146771  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0217 11:57:17.158301  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0217 11:57:17.170200  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0217 11:57:17.177413  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0217 11:57:17.186556  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0217 11:57:17.193933  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
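
Each `openssl x509 -checkend 86400` run above asks whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit would force regeneration before the cluster restart proceeds. The equivalent question asked with Go's crypto/x509, as a self-contained sketch:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires inside
	// the given window, matching what -checkend computes.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		fmt.Println("expires within 24h:", soon, "err:", err)
	}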
	I0217 11:57:17.203839  100380 kubeadm.go:392] StartCluster: {Name:ha-783738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.168 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 11:57:17.204089  100380 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0217 11:57:17.225257  100380 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0217 11:57:17.236858  100380 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0217 11:57:17.236876  100380 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0217 11:57:17.236920  100380 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0217 11:57:17.246285  100380 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0217 11:57:17.246828  100380 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-783738" does not appear in /home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:57:17.246986  100380 kubeconfig.go:62] /home/jenkins/minikube-integration/20427-77349/kubeconfig needs updating (will repair): [kubeconfig missing "ha-783738" cluster setting kubeconfig missing "ha-783738" context setting]
	I0217 11:57:17.247367  100380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-77349/kubeconfig: {Name:mka23a5c17f10bb58374e83755a2ac6a44464e11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:17.247895  100380 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:57:17.248117  100380 kapi.go:59] client config for ha-783738: &rest.Config{Host:"https://192.168.39.249:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/client.crt", KeyFile:"/home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/client.key", CAFile:"/home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24df700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0217 11:57:17.248591  100380 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0217 11:57:17.248610  100380 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0217 11:57:17.248615  100380 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0217 11:57:17.248619  100380 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0217 11:57:17.248634  100380 cert_rotation.go:140] Starting client certificate rotation controller
	I0217 11:57:17.249054  100380 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0217 11:57:17.258029  100380 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.249
	I0217 11:57:17.258053  100380 kubeadm.go:597] duration metric: took 21.170416ms to restartPrimaryControlPlane
	I0217 11:57:17.258062  100380 kubeadm.go:394] duration metric: took 54.240079ms to StartCluster
	I0217 11:57:17.258077  100380 settings.go:142] acquiring lock: {Name:mkf730c657b1c2d5a481dbeb02dabe7dfa17f2d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:17.258150  100380 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:57:17.258639  100380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-77349/kubeconfig: {Name:mka23a5c17f10bb58374e83755a2ac6a44464e11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:17.258848  100380 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0217 11:57:17.258870  100380 start.go:241] waiting for startup goroutines ...
	I0217 11:57:17.258884  100380 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0217 11:57:17.259112  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:57:17.261397  100380 out.go:177] * Enabled addons: 
	I0217 11:57:17.262668  100380 addons.go:514] duration metric: took 3.785415ms for enable addons: enabled=[]
	I0217 11:57:17.262703  100380 start.go:246] waiting for cluster config update ...
	I0217 11:57:17.262713  100380 start.go:255] writing updated cluster config ...
	I0217 11:57:17.264127  100380 out.go:201] 
	I0217 11:57:17.265577  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:57:17.265703  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:57:17.267570  100380 out.go:177] * Starting "ha-783738-m02" control-plane node in "ha-783738" cluster
	I0217 11:57:17.268921  100380 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0217 11:57:17.268950  100380 cache.go:56] Caching tarball of preloaded images
	I0217 11:57:17.269061  100380 preload.go:172] Found /home/jenkins/minikube-integration/20427-77349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0217 11:57:17.269074  100380 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0217 11:57:17.269250  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:57:17.269484  100380 start.go:360] acquireMachinesLock for ha-783738-m02: {Name:mk05ba8323ae77ab7dcc14c378d65810d956fdc0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0217 11:57:17.269554  100380 start.go:364] duration metric: took 46.103µs to acquireMachinesLock for "ha-783738-m02"
	I0217 11:57:17.269576  100380 start.go:96] Skipping create...Using existing machine configuration
	I0217 11:57:17.269584  100380 fix.go:54] fixHost starting: m02
	I0217 11:57:17.269846  100380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:57:17.269891  100380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:57:17.284961  100380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45093
	I0217 11:57:17.285438  100380 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:57:17.285964  100380 main.go:141] libmachine: Using API Version  1
	I0217 11:57:17.285991  100380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:57:17.286358  100380 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:57:17.286562  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:17.286744  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetState
	I0217 11:57:17.288288  100380 fix.go:112] recreateIfNeeded on ha-783738-m02: state=Stopped err=<nil>
	I0217 11:57:17.288317  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	W0217 11:57:17.288473  100380 fix.go:138] unexpected machine state, will restart: <nil>
	I0217 11:57:17.290496  100380 out.go:177] * Restarting existing kvm2 VM for "ha-783738-m02" ...
	I0217 11:57:17.291737  100380 main.go:141] libmachine: (ha-783738-m02) Calling .Start
	I0217 11:57:17.291936  100380 main.go:141] libmachine: (ha-783738-m02) starting domain...
	I0217 11:57:17.291957  100380 main.go:141] libmachine: (ha-783738-m02) ensuring networks are active...
	I0217 11:57:17.292625  100380 main.go:141] libmachine: (ha-783738-m02) Ensuring network default is active
	I0217 11:57:17.292935  100380 main.go:141] libmachine: (ha-783738-m02) Ensuring network mk-ha-783738 is active
	I0217 11:57:17.293260  100380 main.go:141] libmachine: (ha-783738-m02) getting domain XML...
	I0217 11:57:17.293893  100380 main.go:141] libmachine: (ha-783738-m02) creating domain...
	I0217 11:57:18.506378  100380 main.go:141] libmachine: (ha-783738-m02) waiting for IP...
	I0217 11:57:18.507364  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:18.507881  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:18.507974  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:18.507878  100573 retry.go:31] will retry after 190.071186ms: waiting for domain to come up
	I0217 11:57:18.699203  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:18.699617  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:18.699682  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:18.699590  100573 retry.go:31] will retry after 254.022024ms: waiting for domain to come up
	I0217 11:57:18.955132  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:18.955578  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:18.955602  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:18.955533  100573 retry.go:31] will retry after 332.594264ms: waiting for domain to come up
	I0217 11:57:19.290041  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:19.290494  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:19.290519  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:19.290472  100573 retry.go:31] will retry after 550.484931ms: waiting for domain to come up
	I0217 11:57:19.842363  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:19.842844  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:19.842873  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:19.842822  100573 retry.go:31] will retry after 743.60757ms: waiting for domain to come up
	I0217 11:57:20.587667  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:20.588025  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:20.588058  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:20.587981  100573 retry.go:31] will retry after 701.750144ms: waiting for domain to come up
	I0217 11:57:21.290980  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:21.291500  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:21.291530  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:21.291445  100573 retry.go:31] will retry after 755.313925ms: waiting for domain to come up
	I0217 11:57:22.047876  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:22.048286  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:22.048318  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:22.048246  100573 retry.go:31] will retry after 1.338224716s: waiting for domain to come up
	I0217 11:57:23.388238  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:23.388759  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:23.388796  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:23.388727  100573 retry.go:31] will retry after 1.367661407s: waiting for domain to come up
	I0217 11:57:24.758376  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:24.758722  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:24.758764  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:24.758718  100573 retry.go:31] will retry after 2.08548116s: waiting for domain to come up
	I0217 11:57:26.846621  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:26.847150  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:26.847253  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:26.847166  100573 retry.go:31] will retry after 1.933968455s: waiting for domain to come up
	I0217 11:57:28.782369  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:28.782785  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:28.782815  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:28.782752  100573 retry.go:31] will retry after 3.162167749s: waiting for domain to come up
	I0217 11:57:31.947188  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:31.947578  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:31.947603  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:31.947545  100573 retry.go:31] will retry after 3.924986004s: waiting for domain to come up
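
The wait-for-IP delays above (190ms, 254ms, 332ms, 550ms, ... 3.9s) follow a jittered exponential backoff. A minimal sketch of that pattern; the base delay, growth factor, and jitter width are illustrative guesses, not retry.go's exact parameters:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// backoff returns the delay before the given retry attempt: exponential
	// growth from a base delay, with random jitter so retries do not align.
	func backoff(base time.Duration, factor float64, attempt int) time.Duration {
		d := float64(base)
		for i := 0; i < attempt; i++ {
			d *= factor
		}
		jitter := 1 + (rand.Float64()-0.5)*0.5 // +/-25% jitter, assumed
		return time.Duration(d * jitter)
	}

	func main() {
		for attempt := 0; attempt < 8; attempt++ {
			fmt.Println(backoff(200*time.Millisecond, 1.5, attempt))
		}
	}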
	I0217 11:57:35.877102  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:35.877437  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has current primary IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:35.877460  100380 main.go:141] libmachine: (ha-783738-m02) found domain IP: 192.168.39.31
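
The retry.go:31 lines above show libmachine polling libvirt's DHCP leases for the domain's address, growing the wait between misses from roughly 0.7s to 3.9s before the lease finally appears. A minimal Go sketch of that retry-with-growing-backoff shape; waitForIP and the inline lookup closure are illustrative stand-ins, not the minikube API:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP calls lookup until it succeeds or the deadline passes, sleeping a
// little longer after each miss (roughly the 0.7s -> 3.9s growth in the log).
func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	delay := 700 * time.Millisecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for domain to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the interval ~1.5x per attempt
	}
	return "", errors.New("unable to find current IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 3 { // hypothetical lease query: miss twice, then succeed
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.31", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}
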
	I0217 11:57:35.877473  100380 main.go:141] libmachine: (ha-783738-m02) reserving static IP address...
	I0217 11:57:35.877915  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "ha-783738-m02", mac: "52:54:00:06:81:a2", ip: "192.168.39.31"} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:35.877942  100380 main.go:141] libmachine: (ha-783738-m02) DBG | skip adding static IP to network mk-ha-783738 - found existing host DHCP lease matching {name: "ha-783738-m02", mac: "52:54:00:06:81:a2", ip: "192.168.39.31"}
	I0217 11:57:35.877960  100380 main.go:141] libmachine: (ha-783738-m02) reserved static IP address 192.168.39.31 for domain ha-783738-m02
	I0217 11:57:35.877972  100380 main.go:141] libmachine: (ha-783738-m02) waiting for SSH...
	I0217 11:57:35.877983  100380 main.go:141] libmachine: (ha-783738-m02) DBG | Getting to WaitForSSH function...
	I0217 11:57:35.880382  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:35.880801  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:35.880830  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:35.880903  100380 main.go:141] libmachine: (ha-783738-m02) DBG | Using SSH client type: external
	I0217 11:57:35.880925  100380 main.go:141] libmachine: (ha-783738-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa (-rw-------)
	I0217 11:57:35.880955  100380 main.go:141] libmachine: (ha-783738-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0217 11:57:35.880970  100380 main.go:141] libmachine: (ha-783738-m02) DBG | About to run SSH command:
	I0217 11:57:35.880982  100380 main.go:141] libmachine: (ha-783738-m02) DBG | exit 0
	I0217 11:57:36.005182  100380 main.go:141] libmachine: (ha-783738-m02) DBG | SSH cmd err, output: <nil>: 
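
WaitForSSH shells out to the system ssh binary with host-key checking disabled and runs `exit 0`; an empty error in the "SSH cmd err, output" line above means the guest's sshd answered. A rough Go equivalent of that probe, with the flags taken from the command line logged above (the key path in main is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

// sshReady returns nil when sshd on ip accepts the key and runs `exit 0`.
func sshReady(ip, keyPath string) error {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null", // no known_hosts pollution
		"-i", keyPath,
		"docker@"+ip,
		"exit 0")
	return cmd.Run()
}

func main() {
	err := sshReady("192.168.39.31", "/home/jenkins/.ssh/id_rsa") // hypothetical key path
	fmt.Println("ssh ready:", err == nil)
}
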
	I0217 11:57:36.005527  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetConfigRaw
	I0217 11:57:36.006216  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetIP
	I0217 11:57:36.008704  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.009084  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.009118  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.009443  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:57:36.009639  100380 machine.go:93] provisionDockerMachine start ...
	I0217 11:57:36.009657  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:36.009816  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.011849  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.012187  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.012218  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.012360  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.012557  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.012710  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.012836  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.012947  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.013115  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.013130  100380 main.go:141] libmachine: About to run SSH command:
	hostname
	I0217 11:57:36.113056  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0217 11:57:36.113093  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetMachineName
	I0217 11:57:36.113376  100380 buildroot.go:166] provisioning hostname "ha-783738-m02"
	I0217 11:57:36.113403  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetMachineName
	I0217 11:57:36.113566  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.116233  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.116606  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.116634  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.116762  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.116907  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.117025  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.117242  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.117464  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.117681  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.117699  100380 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-783738-m02 && echo "ha-783738-m02" | sudo tee /etc/hostname
	I0217 11:57:36.230628  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-783738-m02
	
	I0217 11:57:36.230670  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.233644  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.233991  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.234015  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.234196  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.234491  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.234686  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.234856  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.235006  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.235194  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.235211  100380 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-783738-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-783738-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-783738-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0217 11:57:36.341290  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
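
The hosts script above is idempotent: it leaves /etc/hosts alone when some line already ends in the node name, prefers rewriting an existing 127.0.1.1 entry, and only appends as a last resort. The same check-then-edit logic as a small Go sketch operating on an arbitrary hosts file (the /tmp path in main is just a demo):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry makes "127.0.1.1 <name>" present in the file at path.
func ensureHostsEntry(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) >= 2 && f[len(f)-1] == name {
			return nil // some entry already maps the name: nothing to do
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // rewrite the existing alias line
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	lines = append(lines, "127.0.1.1 "+name) // else append a fresh entry
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	os.WriteFile("/tmp/hosts-demo", []byte("127.0.0.1 localhost\n127.0.1.1 minikube\n"), 0644)
	if err := ensureHostsEntry("/tmp/hosts-demo", "ha-783738-m02"); err != nil {
		panic(err)
	}
}
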
	I0217 11:57:36.341332  100380 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20427-77349/.minikube CaCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20427-77349/.minikube}
	I0217 11:57:36.341348  100380 buildroot.go:174] setting up certificates
	I0217 11:57:36.341360  100380 provision.go:84] configureAuth start
	I0217 11:57:36.341373  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetMachineName
	I0217 11:57:36.341646  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetIP
	I0217 11:57:36.344453  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.344944  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.344981  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.345158  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.347416  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.347719  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.347744  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.347910  100380 provision.go:143] copyHostCerts
	I0217 11:57:36.347943  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem
	I0217 11:57:36.347989  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem, removing ...
	I0217 11:57:36.347999  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem
	I0217 11:57:36.348065  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem (1082 bytes)
	I0217 11:57:36.348156  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem
	I0217 11:57:36.348190  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem, removing ...
	I0217 11:57:36.348200  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem
	I0217 11:57:36.348229  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem (1123 bytes)
	I0217 11:57:36.348286  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem
	I0217 11:57:36.348310  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem, removing ...
	I0217 11:57:36.348320  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem
	I0217 11:57:36.348347  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem (1675 bytes)
	I0217 11:57:36.348413  100380 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem org=jenkins.ha-783738-m02 san=[127.0.0.1 192.168.39.31 ha-783738-m02 localhost minikube]
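
provision.go:117 mints a per-node server certificate whose SANs cover every name the Docker daemon may be reached by: loopback, the node IP, the node hostname, localhost, and minikube. A compact crypto/x509 sketch of the same SAN set; it self-signs for brevity, whereas the real provisioner signs with the ca.pem/ca-key.pem pair named in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-783738-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// the SAN set from the log line above:
		DNSNames:    []string{"ha-783738-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.31")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
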
	I0217 11:57:36.476199  100380 provision.go:177] copyRemoteCerts
	I0217 11:57:36.476256  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0217 11:57:36.476280  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.479126  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.479497  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.479529  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.479677  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.479868  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.480073  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.480258  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	I0217 11:57:36.558954  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0217 11:57:36.559023  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0217 11:57:36.581755  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0217 11:57:36.581816  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0217 11:57:36.604328  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0217 11:57:36.604411  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0217 11:57:36.626183  100380 provision.go:87] duration metric: took 284.807453ms to configureAuth
	I0217 11:57:36.626219  100380 buildroot.go:189] setting minikube options for container-runtime
	I0217 11:57:36.626492  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:57:36.626522  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:36.626768  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.629194  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.629569  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.629594  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.629740  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.629904  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.630077  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.630201  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.630389  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.630601  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.630614  100380 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0217 11:57:36.730964  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0217 11:57:36.730995  100380 buildroot.go:70] root file system type: tmpfs
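
buildroot.go probes what filesystem backs / before laying down the systemd unit; on this buildroot guest the root is a tmpfs, so files written there do not survive a reboot, which is presumably why the later diff finds no pre-existing docker.service to compare against. The probe is the df one-liner from the log; wrapped in Go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// same pipeline the provisioner runs over SSH
	out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("root file system type:", strings.TrimSpace(string(out)))
}
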
	I0217 11:57:36.731148  100380 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0217 11:57:36.731184  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.733718  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.734119  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.734150  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.734340  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.734539  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.734714  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.734847  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.734986  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.735198  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.735304  100380 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.249"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0217 11:57:36.846599  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.249
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0217 11:57:36.846633  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.849370  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.849714  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.849733  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.849923  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.850116  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.850290  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.850443  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.850608  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.850788  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.850805  100380 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0217 11:57:38.700010  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0217 11:57:38.700036  100380 machine.go:96] duration metric: took 2.690384734s to provisionDockerMachine
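
The unit install is deliberately idempotent: diff the freshly rendered docker.service.new against the live unit and only move it into place and daemon-reload/enable/restart when they differ — or, as in the output above, when no unit exists yet. A simplified Go sketch of that guard (it drops the log's sudo and -f flags, and installIfChanged is an illustrative name):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged swaps newPath into livePath and restarts docker only when
// the rendered content actually differs from what is installed.
func installIfChanged(newPath, livePath string) (bool, error) {
	want, err := os.ReadFile(newPath)
	if err != nil {
		return false, err
	}
	have, err := os.ReadFile(livePath) // a missing live unit (first boot) counts as changed
	if err == nil && bytes.Equal(want, have) {
		return false, nil // identical: skip the disruptive restart entirely
	}
	if err := os.Rename(newPath, livePath); err != nil {
		return false, err
	}
	for _, args := range [][]string{ // mirror the log's follow-up steps
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return true, fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return true, nil
}

func main() {
	changed, err := installIfChanged("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service")
	fmt.Println(changed, err)
}
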
	I0217 11:57:38.700051  100380 start.go:293] postStartSetup for "ha-783738-m02" (driver="kvm2")
	I0217 11:57:38.700060  100380 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0217 11:57:38.700075  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:38.700389  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0217 11:57:38.700425  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:38.703068  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.703435  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:38.703465  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.703605  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:38.703807  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:38.703952  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:38.704102  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	I0217 11:57:38.783381  100380 ssh_runner.go:195] Run: cat /etc/os-release
	I0217 11:57:38.787188  100380 info.go:137] Remote host: Buildroot 2023.02.9
	I0217 11:57:38.787215  100380 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-77349/.minikube/addons for local assets ...
	I0217 11:57:38.787270  100380 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-77349/.minikube/files for local assets ...
	I0217 11:57:38.787341  100380 filesync.go:149] local asset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> 845022.pem in /etc/ssl/certs
	I0217 11:57:38.787352  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> /etc/ssl/certs/845022.pem
	I0217 11:57:38.787430  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0217 11:57:38.796091  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem --> /etc/ssl/certs/845022.pem (1708 bytes)
	I0217 11:57:38.817716  100380 start.go:296] duration metric: took 117.649565ms for postStartSetup
	I0217 11:57:38.817759  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:38.818052  100380 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0217 11:57:38.818087  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:38.820354  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.820669  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:38.820694  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.820809  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:38.820978  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:38.821138  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:38.821273  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	I0217 11:57:38.900214  100380 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0217 11:57:38.900294  100380 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0217 11:57:38.959273  100380 fix.go:56] duration metric: took 21.689681729s for fixHost
	I0217 11:57:38.959327  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:38.961853  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.962326  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:38.962364  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.962591  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:38.962788  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:38.962952  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:38.963062  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:38.963238  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:38.963408  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:38.963419  100380 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0217 11:57:39.071315  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739793459.049434891
	
	I0217 11:57:39.071339  100380 fix.go:216] guest clock: 1739793459.049434891
	I0217 11:57:39.071349  100380 fix.go:229] Guest: 2025-02-17 11:57:39.049434891 +0000 UTC Remote: 2025-02-17 11:57:38.959302801 +0000 UTC m=+48.782039917 (delta=90.13209ms)
	I0217 11:57:39.071366  100380 fix.go:200] guest clock delta is within tolerance: 90.13209ms
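
fix.go reads the guest's `date +%s.%N`, compares it with the host clock it captured around the SSH round-trip, and only resyncs when the delta exceeds a tolerance; here the 90.13ms skew is within bounds, so nothing is adjusted. The comparison is plain time arithmetic (the 2s tolerance below is an assumption for illustration, not a value from the log):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports the absolute guest/host skew and whether it is acceptable.
func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(90 * time.Millisecond) // the delta observed in the log
	d, ok := withinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", d, ok)
}
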
	I0217 11:57:39.071371  100380 start.go:83] releasing machines lock for "ha-783738-m02", held for 21.801804436s
	I0217 11:57:39.071393  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:39.071600  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetIP
	I0217 11:57:39.074321  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.074707  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:39.074736  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.076949  100380 out.go:177] * Found network options:
	I0217 11:57:39.078428  100380 out.go:177]   - NO_PROXY=192.168.39.249
	W0217 11:57:39.079686  100380 proxy.go:119] fail to check proxy env: Error ip not in block
	I0217 11:57:39.079714  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:39.080218  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:39.080403  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:39.080510  100380 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0217 11:57:39.080551  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	W0217 11:57:39.080631  100380 proxy.go:119] fail to check proxy env: Error ip not in block
	I0217 11:57:39.080722  100380 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0217 11:57:39.080748  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:39.083432  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.083453  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.083887  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:39.083914  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:39.083933  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.083949  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.084264  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:39.084411  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:39.084597  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:39.084609  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:39.084763  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:39.084784  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	I0217 11:57:39.084915  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:39.085034  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	W0217 11:57:39.178061  100380 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0217 11:57:39.178137  100380 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0217 11:57:39.195964  100380 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
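
The find/mv pass above disarms competing CNI configs: any bridge or podman conflist under /etc/cni/net.d is renamed aside with a .mk_disabled suffix rather than deleted, so it can be restored later. The same rename pass as a Go sketch (disableBridgeCNI is an illustrative name):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames bridge/podman CNI configs out of the runtime's view.
func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // already disabled, or not a config file
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, src)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNI("/etc/cni/net.d")
	fmt.Println(moved, err)
}
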
	I0217 11:57:39.196001  100380 start.go:495] detecting cgroup driver to use...
	I0217 11:57:39.196148  100380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0217 11:57:39.216666  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0217 11:57:39.226815  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0217 11:57:39.236611  100380 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0217 11:57:39.236669  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0217 11:57:39.246500  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0217 11:57:39.256691  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0217 11:57:39.266509  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0217 11:57:39.276231  100380 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0217 11:57:39.286298  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0217 11:57:39.296149  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0217 11:57:39.305984  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
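
The sed pipeline above rewrites /etc/containerd/config.toml line by line: pin the pause image, force SystemdCgroup = false (this run drives the "cgroupfs" driver, per containerd.go:146), migrate the legacy io.containerd.runtime.v1.linux and runc.v1 names to runc.v2, reset conf_dir, and re-enable unprivileged ports. One of those edits, the SystemdCgroup rewrite, expressed as a Go regexp for flavor:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
		"    SystemdCgroup = true\n"
	// multiline match preserving indentation, like the sed expression in the log
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
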
	I0217 11:57:39.315650  100380 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0217 11:57:39.324721  100380 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0217 11:57:39.324777  100380 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0217 11:57:39.334429  100380 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
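
The sysctl probe fails with status 255 because /proc/sys/net/bridge does not exist until the br_netfilter module is loaded; the code treats that as non-fatal, runs modprobe, and then enables IPv4 forwarding by writing directly into procfs. That final write in Go:

package main

import "os"

func main() {
	// equivalent of: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		panic(err) // needs root, just like the sudo in the log
	}
}
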
	I0217 11:57:39.343052  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:39.458041  100380 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0217 11:57:39.483361  100380 start.go:495] detecting cgroup driver to use...
	I0217 11:57:39.483453  100380 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0217 11:57:39.501404  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0217 11:57:39.522545  100380 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0217 11:57:39.545214  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0217 11:57:39.557462  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0217 11:57:39.569445  100380 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0217 11:57:39.593668  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0217 11:57:39.606767  100380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0217 11:57:39.623713  100380 ssh_runner.go:195] Run: which cri-dockerd
	I0217 11:57:39.627306  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0217 11:57:39.635920  100380 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0217 11:57:39.651184  100380 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0217 11:57:39.767938  100380 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0217 11:57:39.884761  100380 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0217 11:57:39.884806  100380 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0217 11:57:39.900934  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:40.013206  100380 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0217 11:58:41.088581  100380 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.075335279s)
	I0217 11:58:41.088680  100380 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0217 11:58:41.109373  100380 out.go:201] 
	W0217 11:58:41.110918  100380 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Feb 17 11:57:37 ha-783738-m02 systemd[1]: Starting Docker Application Container Engine...
	Feb 17 11:57:37 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:37.207555071Z" level=info msg="Starting up"
	Feb 17 11:57:37 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:37.208523706Z" level=info msg="containerd not running, starting managed containerd"
	Feb 17 11:57:37 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:37.209284365Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=499
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.234357473Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.253922324Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254071326Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254155313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254195097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254502645Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254572700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254826671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254880442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254926515Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254965881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.255209553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.255502921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257578132Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257723954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257912930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257960933Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.258214223Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.258292090Z" level=info msg="metadata content store policy set" policy=shared
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262281766Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262389757Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262437193Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262478052Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262523730Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262614966Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262915194Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263049035Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263094390Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263137669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263176270Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263217488Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263254710Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263292496Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263339613Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263377065Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263418085Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263453223Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263511094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263549833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263589341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263631649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263726157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263766086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263809930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263847665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263885358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263932212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263972615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264020660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264063975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264103157Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264158305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264194401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264230305Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264327104Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264417123Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264457690Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264499822Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264534568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264575047Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264616722Z" level=info msg="NRI interface is disabled by configuration."
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264938960Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.265032087Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.265091203Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.265132167Z" level=info msg="containerd successfully booted in 0.032037s"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.237803305Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.295143778Z" level=info msg="Loading containers: start."
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.484051173Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.565431513Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.632528889Z" level=info msg="Loading containers: done."
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.653906274Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.653941707Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.653962858Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.654196375Z" level=info msg="Daemon has completed initialization"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.676178691Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.676315120Z" level=info msg="API listen on [::]:2376"
	Feb 17 11:57:38 ha-783738-m02 systemd[1]: Started Docker Application Container Engine.
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.005718953Z" level=info msg="Processing signal 'terminated'"
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.007186879Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.007378782Z" level=info msg="Daemon shutdown complete"
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.007446197Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 17 11:57:40 ha-783738-m02 systemd[1]: Stopping Docker Application Container Engine...
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.008214930Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 17 11:57:41 ha-783738-m02 systemd[1]: docker.service: Deactivated successfully.
	Feb 17 11:57:41 ha-783738-m02 systemd[1]: Stopped Docker Application Container Engine.
	Feb 17 11:57:41 ha-783738-m02 systemd[1]: Starting Docker Application Container Engine...
	Feb 17 11:57:41 ha-783738-m02 dockerd[1120]: time="2025-02-17T11:57:41.051838490Z" level=info msg="Starting up"
	Feb 17 11:58:41 ha-783738-m02 dockerd[1120]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 17 11:58:41 ha-783738-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 17 11:58:41 ha-783738-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 17 11:58:41 ha-783738-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Feb 17 11:57:37 ha-783738-m02 systemd[1]: Starting Docker Application Container Engine...
	Feb 17 11:57:37 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:37.207555071Z" level=info msg="Starting up"
	Feb 17 11:57:37 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:37.208523706Z" level=info msg="containerd not running, starting managed containerd"
	Feb 17 11:57:37 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:37.209284365Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=499
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.234357473Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.253922324Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254071326Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254155313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254195097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254502645Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254572700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254826671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254880442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254926515Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254965881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.255209553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.255502921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257578132Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257723954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257912930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257960933Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.258214223Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.258292090Z" level=info msg="metadata content store policy set" policy=shared
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262281766Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262389757Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262437193Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262478052Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262523730Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262614966Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262915194Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263049035Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263094390Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263137669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263176270Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263217488Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263254710Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263292496Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263339613Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263377065Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263418085Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263453223Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263511094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263549833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263589341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263631649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263726157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263766086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263809930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263847665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263885358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263932212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263972615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264020660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264063975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264103157Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264158305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264194401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264230305Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264327104Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264417123Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264457690Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264499822Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264534568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264575047Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264616722Z" level=info msg="NRI interface is disabled by configuration."
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264938960Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.265032087Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.265091203Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.265132167Z" level=info msg="containerd successfully booted in 0.032037s"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.237803305Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.295143778Z" level=info msg="Loading containers: start."
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.484051173Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.565431513Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.632528889Z" level=info msg="Loading containers: done."
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.653906274Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.653941707Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.653962858Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.654196375Z" level=info msg="Daemon has completed initialization"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.676178691Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.676315120Z" level=info msg="API listen on [::]:2376"
	Feb 17 11:57:38 ha-783738-m02 systemd[1]: Started Docker Application Container Engine.
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.005718953Z" level=info msg="Processing signal 'terminated'"
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.007186879Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.007378782Z" level=info msg="Daemon shutdown complete"
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.007446197Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 17 11:57:40 ha-783738-m02 systemd[1]: Stopping Docker Application Container Engine...
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.008214930Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 17 11:57:41 ha-783738-m02 systemd[1]: docker.service: Deactivated successfully.
	Feb 17 11:57:41 ha-783738-m02 systemd[1]: Stopped Docker Application Container Engine.
	Feb 17 11:57:41 ha-783738-m02 systemd[1]: Starting Docker Application Container Engine...
	Feb 17 11:57:41 ha-783738-m02 dockerd[1120]: time="2025-02-17T11:57:41.051838490Z" level=info msg="Starting up"
	Feb 17 11:58:41 ha-783738-m02 dockerd[1120]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 17 11:58:41 ha-783738-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 17 11:58:41 ha-783738-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 17 11:58:41 ha-783738-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
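	The capture above shows the failure mode: dockerd[1120] starts at 11:57:41 but cannot dial its containerd socket before the 60-second deadline expires at 11:58:41. A hedged follow-up sketch for a node in this state (these commands were not part of the captured run and assume shell access to ha-783738-m02; both socket paths come from the log lines above):
	
	# Did the docker-managed containerd ever create its socket(s)?
	sudo ls -l /run/containerd/containerd.sock /var/run/docker/containerd/containerd.sock
	# Inspect the failed unit and its recent journal, as the error text suggests:
	systemctl status docker.service --no-pager
	sudo journalctl -xeu docker.service --no-pager | tail -n 50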
	W0217 11:58:41.110964  100380 out.go:270] * 
	* 
	W0217 11:58:41.111815  100380 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0217 11:58:41.113412  100380 out.go:201] 

** /stderr **
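The advice box in the stderr above asks for full logs via `minikube logs --file=logs.txt`; adapted to the binary and profile used by this run, the equivalent invocation would presumably be:

	out/minikube-linux-amd64 -p ha-783738 logs --file=logs.txt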
ha_test.go:564: failed to start cluster. args "out/minikube-linux-amd64 start -p ha-783738 --wait=true -v=7 --alsologtostderr --driver=kvm2 " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-783738 -n ha-783738
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-783738 -n ha-783738: exit status 2 (235.921283ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
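The harness tolerates exit status 2 here because the host itself reports Running; to see which component is still down, the same Go-template mechanism used for --format={{.Host}} can query the other status fields (field names assumed from minikube's standard status output, not shown in this run):

	out/minikube-linux-amd64 status -p ha-783738 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'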
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-783738 cp ha-783738-m03:/home/docker/cp-test.txt                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:50 UTC | 17 Feb 25 11:50 UTC |
	|         | ha-783738-m04:/home/docker/cp-test_ha-783738-m03_ha-783738-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:50 UTC | 17 Feb 25 11:50 UTC |
	|         | ha-783738-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n ha-783738-m04 sudo cat                                          | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:50 UTC | 17 Feb 25 11:50 UTC |
	|         | /home/docker/cp-test_ha-783738-m03_ha-783738-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-783738 cp testdata/cp-test.txt                                                | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:50 UTC | 17 Feb 25 11:50 UTC |
	|         | ha-783738-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:50 UTC | 17 Feb 25 11:50 UTC |
	|         | ha-783738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-783738 cp ha-783738-m04:/home/docker/cp-test.txt                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:50 UTC | 17 Feb 25 11:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3703533036/001/cp-test_ha-783738-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-783738 cp ha-783738-m04:/home/docker/cp-test.txt                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738:/home/docker/cp-test_ha-783738-m04_ha-783738.txt                       |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n ha-783738 sudo cat                                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | /home/docker/cp-test_ha-783738-m04_ha-783738.txt                                 |           |         |         |                     |                     |
	| cp      | ha-783738 cp ha-783738-m04:/home/docker/cp-test.txt                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m02:/home/docker/cp-test_ha-783738-m04_ha-783738-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n ha-783738-m02 sudo cat                                          | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | /home/docker/cp-test_ha-783738-m04_ha-783738-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-783738 cp ha-783738-m04:/home/docker/cp-test.txt                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m03:/home/docker/cp-test_ha-783738-m04_ha-783738-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n ha-783738-m03 sudo cat                                          | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | /home/docker/cp-test_ha-783738-m04_ha-783738-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-783738 node stop m02 -v=7                                                     | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-783738 node start m02 -v=7                                                    | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-783738 -v=7                                                           | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-783738 -v=7                                                                | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:52 UTC | 17 Feb 25 11:52 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-783738 --wait=true -v=7                                                    | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:52 UTC | 17 Feb 25 11:56 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-783738                                                                | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:56 UTC |                     |
	| node    | ha-783738 node delete m03 -v=7                                                   | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:56 UTC | 17 Feb 25 11:56 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-783738 stop -v=7                                                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:56 UTC | 17 Feb 25 11:56 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-783738 --wait=true                                                         | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:56 UTC |                     |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/17 11:56:50
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
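	(For example, the first entry below, "I0217 11:56:50.215291  100380 out.go:345]", decodes under this format as severity I for info, date 0217, time 11:56:50.215291, thread id 100380, and source location out.go:345.)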
	I0217 11:56:50.215291  100380 out.go:345] Setting OutFile to fd 1 ...
	I0217 11:56:50.215609  100380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:56:50.215619  100380 out.go:358] Setting ErrFile to fd 2...
	I0217 11:56:50.215624  100380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:56:50.215819  100380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-77349/.minikube/bin
	I0217 11:56:50.216353  100380 out.go:352] Setting JSON to false
	I0217 11:56:50.217237  100380 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5958,"bootTime":1739787452,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0217 11:56:50.217362  100380 start.go:139] virtualization: kvm guest
	I0217 11:56:50.219910  100380 out.go:177] * [ha-783738] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0217 11:56:50.221323  100380 out.go:177]   - MINIKUBE_LOCATION=20427
	I0217 11:56:50.221334  100380 notify.go:220] Checking for updates...
	I0217 11:56:50.223835  100380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0217 11:56:50.224954  100380 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:56:50.226180  100380 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-77349/.minikube
	I0217 11:56:50.227361  100380 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0217 11:56:50.228473  100380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0217 11:56:50.229885  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:56:50.230261  100380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:56:50.230308  100380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:56:50.245239  100380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46091
	I0217 11:56:50.245761  100380 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:56:50.246359  100380 main.go:141] libmachine: Using API Version  1
	I0217 11:56:50.246382  100380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:56:50.246775  100380 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:56:50.246962  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:56:50.247230  100380 driver.go:394] Setting default libvirt URI to qemu:///system
	I0217 11:56:50.247538  100380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:56:50.247594  100380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:56:50.262713  100380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
	I0217 11:56:50.263097  100380 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:56:50.263692  100380 main.go:141] libmachine: Using API Version  1
	I0217 11:56:50.263752  100380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:56:50.264059  100380 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:56:50.264289  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:56:50.297981  100380 out.go:177] * Using the kvm2 driver based on existing profile
	I0217 11:56:50.299143  100380 start.go:297] selected driver: kvm2
	I0217 11:56:50.299155  100380 start.go:901] validating driver "kvm2" against &{Name:ha-783738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.168 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 11:56:50.299304  100380 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0217 11:56:50.299646  100380 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0217 11:56:50.299706  100380 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20427-77349/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0217 11:56:50.314229  100380 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0217 11:56:50.314917  100380 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0217 11:56:50.314949  100380 cni.go:84] Creating CNI manager for ""
	I0217 11:56:50.315000  100380 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0217 11:56:50.315060  100380 start.go:340] cluster config:
	{Name:ha-783738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.168 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 11:56:50.315190  100380 iso.go:125] acquiring lock: {Name:mk4380b7bda8fcd8bced9705ff1695c3fb7dac0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0217 11:56:50.317519  100380 out.go:177] * Starting "ha-783738" primary control-plane node in "ha-783738" cluster
	I0217 11:56:50.318547  100380 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0217 11:56:50.318578  100380 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20427-77349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0217 11:56:50.318588  100380 cache.go:56] Caching tarball of preloaded images
	I0217 11:56:50.318681  100380 preload.go:172] Found /home/jenkins/minikube-integration/20427-77349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0217 11:56:50.318695  100380 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0217 11:56:50.318829  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:56:50.319009  100380 start.go:360] acquireMachinesLock for ha-783738: {Name:mk05ba8323ae77ab7dcc14c378d65810d956fdc0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0217 11:56:50.319055  100380 start.go:364] duration metric: took 23.519µs to acquireMachinesLock for "ha-783738"
	I0217 11:56:50.319080  100380 start.go:96] Skipping create...Using existing machine configuration
	I0217 11:56:50.319088  100380 fix.go:54] fixHost starting: 
	I0217 11:56:50.319353  100380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:56:50.319391  100380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:56:50.333761  100380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34803
	I0217 11:56:50.334152  100380 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:56:50.334693  100380 main.go:141] libmachine: Using API Version  1
	I0217 11:56:50.334714  100380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:56:50.335000  100380 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:56:50.335210  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:56:50.335347  100380 main.go:141] libmachine: (ha-783738) Calling .GetState
	I0217 11:56:50.336730  100380 fix.go:112] recreateIfNeeded on ha-783738: state=Stopped err=<nil>
	I0217 11:56:50.336752  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	W0217 11:56:50.336864  100380 fix.go:138] unexpected machine state, will restart: <nil>
	I0217 11:56:50.338814  100380 out.go:177] * Restarting existing kvm2 VM for "ha-783738" ...
	I0217 11:56:50.340020  100380 main.go:141] libmachine: (ha-783738) Calling .Start
	I0217 11:56:50.340200  100380 main.go:141] libmachine: (ha-783738) starting domain...
	I0217 11:56:50.340221  100380 main.go:141] libmachine: (ha-783738) ensuring networks are active...
	I0217 11:56:50.340845  100380 main.go:141] libmachine: (ha-783738) Ensuring network default is active
	I0217 11:56:50.341268  100380 main.go:141] libmachine: (ha-783738) Ensuring network mk-ha-783738 is active
	I0217 11:56:50.341612  100380 main.go:141] libmachine: (ha-783738) getting domain XML...
	I0217 11:56:50.342286  100380 main.go:141] libmachine: (ha-783738) creating domain...
	I0217 11:56:51.533335  100380 main.go:141] libmachine: (ha-783738) waiting for IP...
	I0217 11:56:51.534198  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:51.534571  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:51.534631  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:51.534554  100416 retry.go:31] will retry after 214.112758ms: waiting for domain to come up
	I0217 11:56:51.750038  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:51.750535  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:51.750587  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:51.750528  100416 retry.go:31] will retry after 287.575076ms: waiting for domain to come up
	I0217 11:56:52.040019  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:52.040473  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:52.040515  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:52.040452  100416 retry.go:31] will retry after 303.389275ms: waiting for domain to come up
	I0217 11:56:52.345057  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:52.345400  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:52.345452  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:52.345383  100416 retry.go:31] will retry after 580.610288ms: waiting for domain to come up
	I0217 11:56:52.927102  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:52.927623  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:52.927663  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:52.927596  100416 retry.go:31] will retry after 470.88869ms: waiting for domain to come up
	I0217 11:56:53.400293  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:53.400698  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:53.400725  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:53.400636  100416 retry.go:31] will retry after 645.102407ms: waiting for domain to come up
	I0217 11:56:54.046798  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:54.047309  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:54.047365  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:54.047265  100416 retry.go:31] will retry after 993.016218ms: waiting for domain to come up
	I0217 11:56:55.041450  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:55.041808  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:55.041828  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:55.041790  100416 retry.go:31] will retry after 1.096274529s: waiting for domain to come up
	I0217 11:56:56.139475  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:56.139892  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:56.139957  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:56.139882  100416 retry.go:31] will retry after 1.840421804s: waiting for domain to come up
	I0217 11:56:57.981618  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:57.982040  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:57.982068  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:57.981979  100416 retry.go:31] will retry after 1.8969141s: waiting for domain to come up
	I0217 11:56:59.881026  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:59.881535  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:59.881570  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:59.881471  100416 retry.go:31] will retry after 1.890240518s: waiting for domain to come up
	I0217 11:57:01.773274  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:01.773728  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:57:01.773779  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:57:01.773696  100416 retry.go:31] will retry after 3.046762911s: waiting for domain to come up
	I0217 11:57:04.823999  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:04.824458  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:57:04.824497  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:57:04.824453  100416 retry.go:31] will retry after 3.819063496s: waiting for domain to come up
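The repeated "will retry after …: waiting for domain to come up" lines above come from minikube's backoff helper (retry.go), which re-polls libvirt until the domain acquires a DHCP lease, sleeping a growing, jittered delay between attempts. A minimal sketch of that pattern, with illustrative delays and a made-up probe; this is not minikube's actual retry API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil re-runs fn until it succeeds or the deadline passes,
// sleeping a jittered, growing delay between attempts, the same
// shape as the "will retry after 645.102407ms" lines in this log.
func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		// Jitter the delay so concurrent waiters don't poll in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		delay *= 2 // exponential backoff
	}
}

func main() {
	attempts := 0
	err := retryUntil(30*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	fmt.Println("done:", err)
}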
	I0217 11:57:08.647831  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.648309  100380 main.go:141] libmachine: (ha-783738) found domain IP: 192.168.39.249
	I0217 11:57:08.648334  100380 main.go:141] libmachine: (ha-783738) reserving static IP address...
	I0217 11:57:08.648347  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has current primary IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.648799  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "ha-783738", mac: "52:54:00:fb:6f:65", ip: "192.168.39.249"} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.648824  100380 main.go:141] libmachine: (ha-783738) DBG | skip adding static IP to network mk-ha-783738 - found existing host DHCP lease matching {name: "ha-783738", mac: "52:54:00:fb:6f:65", ip: "192.168.39.249"}
	I0217 11:57:08.648835  100380 main.go:141] libmachine: (ha-783738) reserved static IP address 192.168.39.249 for domain ha-783738
	I0217 11:57:08.648846  100380 main.go:141] libmachine: (ha-783738) waiting for SSH...
	I0217 11:57:08.648862  100380 main.go:141] libmachine: (ha-783738) DBG | Getting to WaitForSSH function...
	I0217 11:57:08.650828  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.651193  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.651224  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.651387  100380 main.go:141] libmachine: (ha-783738) DBG | Using SSH client type: external
	I0217 11:57:08.651414  100380 main.go:141] libmachine: (ha-783738) DBG | Using SSH private key: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa (-rw-------)
	I0217 11:57:08.651435  100380 main.go:141] libmachine: (ha-783738) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0217 11:57:08.651464  100380 main.go:141] libmachine: (ha-783738) DBG | About to run SSH command:
	I0217 11:57:08.651480  100380 main.go:141] libmachine: (ha-783738) DBG | exit 0
	I0217 11:57:08.776922  100380 main.go:141] libmachine: (ha-783738) DBG | SSH cmd err, output: <nil>: 
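"Using SSH client type: external" means the provisioner shells out to /usr/bin/ssh with the hardened argument list logged just above: no known-hosts persistence, key-only authentication, and a 10s connect timeout; WaitForSSH succeeds once the trivial command "exit 0" returns. A sketch of assembling that invocation with os/exec (the function name and paths here are placeholders, not minikube's sshutil API):

package main

import (
	"fmt"
	"os/exec"
)

// buildSSHCommand assembles an /usr/bin/ssh invocation equivalent to the
// argument list logged above.
func buildSSHCommand(user, host, keyPath, remoteCmd string) *exec.Cmd {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		remoteCmd,
	}
	return exec.Command("/usr/bin/ssh", args...)
}

func main() {
	cmd := buildSSHCommand("docker", "192.168.39.249", "/path/to/id_rsa", "exit 0")
	fmt.Println(cmd.String()) // inspect the command line; cmd.Run() would execute it
}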
	I0217 11:57:08.777326  100380 main.go:141] libmachine: (ha-783738) Calling .GetConfigRaw
	I0217 11:57:08.777959  100380 main.go:141] libmachine: (ha-783738) Calling .GetIP
	I0217 11:57:08.780301  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.780692  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.780735  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.780948  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:57:08.781137  100380 machine.go:93] provisionDockerMachine start ...
	I0217 11:57:08.781154  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:08.781442  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:08.783478  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.783868  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.783897  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.784048  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:08.784237  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:08.784393  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:08.784570  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:08.784738  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:08.784917  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:08.784928  100380 main.go:141] libmachine: About to run SSH command:
	hostname
	I0217 11:57:08.889484  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0217 11:57:08.889525  100380 main.go:141] libmachine: (ha-783738) Calling .GetMachineName
	I0217 11:57:08.889783  100380 buildroot.go:166] provisioning hostname "ha-783738"
	I0217 11:57:08.889818  100380 main.go:141] libmachine: (ha-783738) Calling .GetMachineName
	I0217 11:57:08.890003  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:08.892666  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.893027  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.893060  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.893202  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:08.893391  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:08.893536  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:08.893661  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:08.893787  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:08.893949  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:08.893960  100380 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-783738 && echo "ha-783738" | sudo tee /etc/hostname
	I0217 11:57:09.014626  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-783738
	
	I0217 11:57:09.014653  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.017274  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.017710  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.017744  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.017939  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.018131  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.018348  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.018473  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.018701  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:09.018967  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:09.018994  100380 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-783738' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-783738/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-783738' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0217 11:57:09.133208  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
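The shell snippet above pins the machine name in the guest's /etc/hosts: if no entry already ends in the hostname, it either patches an existing 127.0.1.1 line with sed or appends a new one. minikube renders that snippet with the target hostname substituted in; a sketch of the templating step (hostsFixupScript is a hypothetical helper, not minikube's):

package main

import "fmt"

// hostsFixupScript renders the shell snippet logged above for an arbitrary
// hostname: replace an existing 127.0.1.1 line, or append one if absent.
func hostsFixupScript(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(hostsFixupScript("ha-783738"))
}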
	I0217 11:57:09.133247  100380 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20427-77349/.minikube CaCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20427-77349/.minikube}
	I0217 11:57:09.133278  100380 buildroot.go:174] setting up certificates
	I0217 11:57:09.133295  100380 provision.go:84] configureAuth start
	I0217 11:57:09.133331  100380 main.go:141] libmachine: (ha-783738) Calling .GetMachineName
	I0217 11:57:09.133680  100380 main.go:141] libmachine: (ha-783738) Calling .GetIP
	I0217 11:57:09.136393  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.136746  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.136771  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.136918  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.139192  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.139545  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.139583  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.139699  100380 provision.go:143] copyHostCerts
	I0217 11:57:09.139734  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem
	I0217 11:57:09.139786  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem, removing ...
	I0217 11:57:09.139804  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem
	I0217 11:57:09.139883  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem (1082 bytes)
	I0217 11:57:09.139996  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem
	I0217 11:57:09.140023  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem, removing ...
	I0217 11:57:09.140030  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem
	I0217 11:57:09.140079  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem (1123 bytes)
	I0217 11:57:09.140159  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem
	I0217 11:57:09.140184  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem, removing ...
	I0217 11:57:09.140191  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem
	I0217 11:57:09.140228  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem (1675 bytes)
	I0217 11:57:09.140314  100380 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem org=jenkins.ha-783738 san=[127.0.0.1 192.168.39.249 ha-783738 localhost minikube]
	I0217 11:57:09.269804  100380 provision.go:177] copyRemoteCerts
	I0217 11:57:09.269900  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0217 11:57:09.269935  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.272592  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.272916  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.272945  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.273095  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.273282  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.273464  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.273600  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:09.355256  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0217 11:57:09.355331  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0217 11:57:09.378132  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0217 11:57:09.378243  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0217 11:57:09.399749  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0217 11:57:09.399830  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0217 11:57:09.421183  100380 provision.go:87] duration metric: took 287.855291ms to configureAuth
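configureAuth regenerated the Docker server certificate with SANs covering 127.0.0.1, the VM IP, the machine name, localhost, and minikube (the san=[…] list logged at 11:57:09.140314), then copied the CA, cert, and key to /etc/docker. A standalone sketch of producing such a SAN-bearing certificate with crypto/x509; note minikube signs against its own CA rather than self-signing, so this is illustrative only:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-783738"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs: IP and DNS names are carried in separate certificate fields.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.249")},
		DNSNames:    []string{"ha-783738", "localhost", "minikube"},
	}
	// Self-signed for the sketch: template doubles as the parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}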
	I0217 11:57:09.421207  100380 buildroot.go:189] setting minikube options for container-runtime
	I0217 11:57:09.421432  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:57:09.421460  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:09.421765  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.424701  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.425141  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.425173  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.425370  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.425557  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.425734  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.425883  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.426059  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:09.426283  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:09.426297  100380 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0217 11:57:09.534976  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0217 11:57:09.535006  100380 buildroot.go:70] root file system type: tmpfs
	I0217 11:57:09.535125  100380 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0217 11:57:09.535163  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.537739  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.538108  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.538126  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.538307  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.538481  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.538662  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.538802  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.538949  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:09.539142  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:09.539243  100380 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0217 11:57:09.658326  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0217 11:57:09.658371  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.661372  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.661838  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.661875  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.662085  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.662300  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.662435  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.662559  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.662707  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:09.662897  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:09.662913  100380 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0217 11:57:11.588699  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0217 11:57:11.588766  100380 machine.go:96] duration metric: took 2.807616414s to provisionDockerMachine
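The "diff … || { mv …; systemctl … }" one-liner above is a guard: the freshly rendered unit is only swapped into place, and Docker only restarted, when its content differs from what is already installed. Here diff fails because no docker.service exists yet on the restarted VM, so the new file is moved in and the service enabled, producing the symlink message. A local sketch of the same compare-then-swap idiom (installIfChanged is hypothetical; minikube performs these steps over SSH with sudo):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged writes the candidate unit next to the installed one,
// compares contents, and only swaps when they actually differ, so an
// unchanged config never triggers a needless daemon restart.
func installIfChanged(path string, content []byte) (changed bool, err error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, content) {
		return false, nil // identical: skip the restart entirely
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, content, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path) // atomic swap on the same filesystem
}

func main() {
	changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
	fmt.Println(changed, err)
}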
	I0217 11:57:11.588782  100380 start.go:293] postStartSetup for "ha-783738" (driver="kvm2")
	I0217 11:57:11.588792  100380 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0217 11:57:11.588810  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.589177  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0217 11:57:11.589221  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.592192  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.592596  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.592627  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.592785  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.592979  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.593170  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.593334  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:11.675232  100380 ssh_runner.go:195] Run: cat /etc/os-release
	I0217 11:57:11.679319  100380 info.go:137] Remote host: Buildroot 2023.02.9
	I0217 11:57:11.679347  100380 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-77349/.minikube/addons for local assets ...
	I0217 11:57:11.679434  100380 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-77349/.minikube/files for local assets ...
	I0217 11:57:11.679553  100380 filesync.go:149] local asset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> 845022.pem in /etc/ssl/certs
	I0217 11:57:11.679569  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> /etc/ssl/certs/845022.pem
	I0217 11:57:11.679700  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0217 11:57:11.688596  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem --> /etc/ssl/certs/845022.pem (1708 bytes)
	I0217 11:57:11.712948  100380 start.go:296] duration metric: took 124.147315ms for postStartSetup
	I0217 11:57:11.713041  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.713388  100380 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0217 11:57:11.713431  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.716109  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.716482  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.716509  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.716697  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.716902  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.717111  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.717237  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:11.799568  100380 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0217 11:57:11.799647  100380 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0217 11:57:11.840659  100380 fix.go:56] duration metric: took 21.521561421s for fixHost
	I0217 11:57:11.840710  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.843711  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.844159  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.844211  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.844334  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.844538  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.844685  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.844877  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.845064  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:11.845292  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:11.845324  100380 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0217 11:57:11.961693  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739793431.919777749
	
	I0217 11:57:11.961720  100380 fix.go:216] guest clock: 1739793431.919777749
	I0217 11:57:11.961728  100380 fix.go:229] Guest: 2025-02-17 11:57:11.919777749 +0000 UTC Remote: 2025-02-17 11:57:11.840688548 +0000 UTC m=+21.663425668 (delta=79.089201ms)
	I0217 11:57:11.961764  100380 fix.go:200] guest clock delta is within tolerance: 79.089201ms
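The guest clock check compares the output of `date +%s.%N` inside the VM with the host-side timestamp taken when fixHost finished, and accepts the drift when it stays under a tolerance. A sketch of that comparison using the two values from the log (the one-second tolerance here is an assumption):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1739793431.919777749" // guest `date +%s.%N` output from the log
	parts := strings.SplitN(guestOut, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec).UTC()

	// Host-side "Remote" timestamp from the log line above.
	remote := time.Date(2025, 2, 17, 11, 57, 11, 840688548, time.UTC)
	delta := guest.Sub(remote)

	const tolerance = time.Second // hypothetical threshold
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta.Abs() < tolerance)
}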
	I0217 11:57:11.961771  100380 start.go:83] releasing machines lock for "ha-783738", held for 21.642703542s
	I0217 11:57:11.961797  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.962076  100380 main.go:141] libmachine: (ha-783738) Calling .GetIP
	I0217 11:57:11.964739  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.965072  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.965098  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.965245  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.965780  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.965938  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.966020  100380 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0217 11:57:11.966085  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.966153  100380 ssh_runner.go:195] Run: cat /version.json
	I0217 11:57:11.966182  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.968710  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.968814  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.969180  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.969211  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.969228  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.969243  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.969400  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.969505  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.969573  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.969654  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.969705  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.969780  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.969855  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:11.969896  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:12.070993  100380 ssh_runner.go:195] Run: systemctl --version
	I0217 11:57:12.076962  100380 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0217 11:57:12.082069  100380 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0217 11:57:12.082164  100380 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0217 11:57:12.097308  100380 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0217 11:57:12.097353  100380 start.go:495] detecting cgroup driver to use...
	I0217 11:57:12.097502  100380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0217 11:57:12.116857  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0217 11:57:12.128177  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0217 11:57:12.139383  100380 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0217 11:57:12.139433  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0217 11:57:12.150535  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0217 11:57:12.161824  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0217 11:57:12.173075  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0217 11:57:12.184735  100380 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0217 11:57:12.196065  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0217 11:57:12.206061  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0217 11:57:12.215826  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0217 11:57:12.225719  100380 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0217 11:57:12.234589  100380 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0217 11:57:12.234644  100380 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0217 11:57:12.244581  100380 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
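The sysctl probe at 11:57:12.225719 exits with status 255 because /proc/sys/net/bridge/ only exists once the br_netfilter kernel module is loaded; the runner treats that as recoverable ("which might be okay"), loads the module with modprobe, and then enables IPv4 forwarding. A sketch of the same probe-then-modprobe fallback (ensureBridgeNetfilter is hypothetical, and needs root to do anything):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter reproduces the probe in the log: the sysctl key
// only exists once br_netfilter is loaded, so a failed stat is treated
// as "module missing", not as a hard error.
func ensureBridgeNetfilter() error {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err == nil {
		return nil // already available
	}
	if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
	}
	// Enable IPv4 forwarding the same way the runner does.
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
}

func main() {
	fmt.Println(ensureBridgeNetfilter())
}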
	I0217 11:57:12.253602  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:12.359116  100380 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0217 11:57:12.382906  100380 start.go:495] detecting cgroup driver to use...
	I0217 11:57:12.383010  100380 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0217 11:57:12.408300  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0217 11:57:12.424027  100380 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0217 11:57:12.444833  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0217 11:57:12.457628  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0217 11:57:12.470140  100380 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0217 11:57:12.497764  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0217 11:57:12.511071  100380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0217 11:57:12.529141  100380 ssh_runner.go:195] Run: which cri-dockerd
	I0217 11:57:12.532846  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0217 11:57:12.541895  100380 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0217 11:57:12.557198  100380 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0217 11:57:12.670128  100380 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0217 11:57:12.796263  100380 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0217 11:57:12.796399  100380 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0217 11:57:12.812229  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:12.923350  100380 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0217 11:57:15.351609  100380 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.428206669s)
	I0217 11:57:15.351699  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0217 11:57:15.364852  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0217 11:57:15.377423  100380 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0217 11:57:15.493635  100380 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0217 11:57:15.621524  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:15.730858  100380 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0217 11:57:15.748138  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0217 11:57:15.761818  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:15.881775  100380 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0217 11:57:15.960772  100380 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0217 11:57:15.960858  100380 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0217 11:57:15.966411  100380 start.go:563] Will wait 60s for crictl version
	I0217 11:57:15.966517  100380 ssh_runner.go:195] Run: which crictl
	I0217 11:57:15.974036  100380 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0217 11:57:16.011837  100380 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0217 11:57:16.011912  100380 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0217 11:57:16.036945  100380 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0217 11:57:16.060974  100380 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0217 11:57:16.061031  100380 main.go:141] libmachine: (ha-783738) Calling .GetIP
	I0217 11:57:16.063810  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:16.064255  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:16.064298  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:16.064499  100380 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0217 11:57:16.068464  100380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0217 11:57:16.080668  100380 kubeadm.go:883] updating cluster {Name:ha-783738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.168 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0217 11:57:16.080804  100380 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0217 11:57:16.080849  100380 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0217 11:57:16.098890  100380 docker.go:689] Got preloaded images: -- stdout --
	kindest/kindnetd:v20250214-acbabc1a
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	ghcr.io/kube-vip/kube-vip:v0.8.9
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0217 11:57:16.098911  100380 docker.go:619] Images already preloaded, skipping extraction
	I0217 11:57:16.098974  100380 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0217 11:57:16.116506  100380 docker.go:689] Got preloaded images: -- stdout --
	kindest/kindnetd:v20250214-acbabc1a
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	ghcr.io/kube-vip/kube-vip:v0.8.9
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0217 11:57:16.116540  100380 cache_images.go:84] Images are preloaded, skipping loading
	I0217 11:57:16.116556  100380 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.32.1 docker true true} ...
	I0217 11:57:16.116703  100380 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-783738 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0217 11:57:16.116764  100380 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0217 11:57:16.164431  100380 cni.go:84] Creating CNI manager for ""
	I0217 11:57:16.164455  100380 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0217 11:57:16.164469  100380 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0217 11:57:16.164499  100380 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-783738 NodeName:ha-783738 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0217 11:57:16.164682  100380 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-783738"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.249"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
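The rendered kubeadm.yaml above is a single file stacking four API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---" markers. A stdlib-only sketch of splitting such a stream and reading back each document's kind; a real consumer would use a YAML parser instead:

package main

import (
	"fmt"
	"strings"
)

func main() {
	stream := `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration`

	// Split on the YAML document separator, then scan for the kind line.
	for i, doc := range strings.Split(stream, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if kind, ok := strings.CutPrefix(line, "kind: "); ok {
				fmt.Printf("document %d: %s\n", i+1, kind)
			}
		}
	}
}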
	
	I0217 11:57:16.164704  100380 kube-vip.go:115] generating kube-vip config ...
	I0217 11:57:16.164766  100380 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0217 11:57:16.178981  100380 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0217 11:57:16.179102  100380 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
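	The static pod above runs kube-vip, which holds the control-plane VIP 192.168.39.254 on eth0 via ARP and elects a single holder through a Kubernetes Lease (5s lease duration, 3s renew deadline, 1s retry period, lease name plndr-cp-lock, all set in the env block). Two read-only checks of that state, as a sketch assuming kubectl access to the cluster and the values configured above:
	
	    # On the current leader this prints the VIP bound to eth0.
	    ip -4 addr show dev eth0 | grep 192.168.39.254
	    # The coordination Lease records which node currently holds the VIP.
	    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}{"\n"}'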
	I0217 11:57:16.179161  100380 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0217 11:57:16.189237  100380 binaries.go:44] Found k8s binaries, skipping transfer
	I0217 11:57:16.189321  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0217 11:57:16.198727  100380 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0217 11:57:16.214787  100380 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0217 11:57:16.231014  100380 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0217 11:57:16.246729  100380 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0217 11:57:16.261779  100380 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0217 11:57:16.265453  100380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
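	The two commands above make the hosts-file update idempotent: the grep probes for an existing VIP entry, and the group command rewrites /etc/hosts with any stale control-plane.minikube.internal line stripped before the current mapping is appended ($$ is the shell PID, giving a unique temp file). The same pattern restated in isolation, under the same paths:
	
	    entry=$'192.168.39.254\tcontrol-plane.minikube.internal'
	    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; echo "$entry"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$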
	I0217 11:57:16.276521  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:16.384249  100380 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0217 11:57:16.401291  100380 certs.go:68] Setting up /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738 for IP: 192.168.39.249
	I0217 11:57:16.401328  100380 certs.go:194] generating shared ca certs ...
	I0217 11:57:16.401350  100380 certs.go:226] acquiring lock for ca certs: {Name:mk7093571229e43ae88bf2507ccc9fd2cd05388e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:16.401508  100380 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.key
	I0217 11:57:16.401544  100380 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.key
	I0217 11:57:16.401555  100380 certs.go:256] generating profile certs ...
	I0217 11:57:16.401635  100380 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/client.key
	I0217 11:57:16.401660  100380 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key.1b1cbf3b
	I0217 11:57:16.401671  100380 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt.1b1cbf3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.31 192.168.39.254]
	I0217 11:57:16.475033  100380 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt.1b1cbf3b ...
	I0217 11:57:16.475062  100380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt.1b1cbf3b: {Name:mkcae1f9f128e66451afcd5b133e6826e9862cbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:16.475228  100380 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key.1b1cbf3b ...
	I0217 11:57:16.475243  100380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key.1b1cbf3b: {Name:mk484c481609a3c2ed473dfecb8f5468118b1367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:16.475330  100380 certs.go:381] copying /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt.1b1cbf3b -> /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt
	I0217 11:57:16.475492  100380 certs.go:385] copying /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key.1b1cbf3b -> /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key
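	The regenerated apiserver certificate must name every endpoint clients may dial: the in-cluster service IP 10.96.0.1, loopback, both control-plane node IPs, and the kube-vip VIP 192.168.39.254. A read-only way to confirm the SANs on the resulting certificate, using the profile path from this run:
	
	    openssl x509 -noout -text \
	        -in /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt \
	        | grep -A1 'Subject Alternative Name'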
	I0217 11:57:16.475629  100380 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.key
	I0217 11:57:16.475644  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0217 11:57:16.475656  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0217 11:57:16.475671  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0217 11:57:16.475699  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0217 11:57:16.475714  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0217 11:57:16.475726  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0217 11:57:16.475737  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0217 11:57:16.475748  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0217 11:57:16.475800  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/84502.pem (1338 bytes)
	W0217 11:57:16.475831  100380 certs.go:480] ignoring /home/jenkins/minikube-integration/20427-77349/.minikube/certs/84502_empty.pem, impossibly tiny 0 bytes
	I0217 11:57:16.475839  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem (1679 bytes)
	I0217 11:57:16.475861  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem (1082 bytes)
	I0217 11:57:16.475900  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem (1123 bytes)
	I0217 11:57:16.475927  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem (1675 bytes)
	I0217 11:57:16.476002  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem (1708 bytes)
	I0217 11:57:16.476031  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/84502.pem -> /usr/share/ca-certificates/84502.pem
	I0217 11:57:16.476046  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> /usr/share/ca-certificates/845022.pem
	I0217 11:57:16.476058  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0217 11:57:16.476652  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0217 11:57:16.507138  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0217 11:57:16.534527  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0217 11:57:16.562922  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0217 11:57:16.587311  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0217 11:57:16.624087  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0217 11:57:16.662037  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0217 11:57:16.713619  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0217 11:57:16.756345  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/certs/84502.pem --> /usr/share/ca-certificates/84502.pem (1338 bytes)
	I0217 11:57:16.803520  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem --> /usr/share/ca-certificates/845022.pem (1708 bytes)
	I0217 11:57:16.846879  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0217 11:57:16.920267  100380 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0217 11:57:16.950648  100380 ssh_runner.go:195] Run: openssl version
	I0217 11:57:16.958784  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84502.pem && ln -fs /usr/share/ca-certificates/84502.pem /etc/ssl/certs/84502.pem"
	I0217 11:57:16.987238  100380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84502.pem
	I0217 11:57:16.994220  100380 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 17 11:42 /usr/share/ca-certificates/84502.pem
	I0217 11:57:16.994283  100380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84502.pem
	I0217 11:57:17.016466  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84502.pem /etc/ssl/certs/51391683.0"
	I0217 11:57:17.039972  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845022.pem && ln -fs /usr/share/ca-certificates/845022.pem /etc/ssl/certs/845022.pem"
	I0217 11:57:17.061818  100380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845022.pem
	I0217 11:57:17.068988  100380 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 17 11:42 /usr/share/ca-certificates/845022.pem
	I0217 11:57:17.069057  100380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845022.pem
	I0217 11:57:17.075953  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/845022.pem /etc/ssl/certs/3ec20f2e.0"
	I0217 11:57:17.094161  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0217 11:57:17.111313  100380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0217 11:57:17.116268  100380 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 17 11:35 /usr/share/ca-certificates/minikubeCA.pem
	I0217 11:57:17.116335  100380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0217 11:57:17.122743  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
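	The ln -fs steps above follow OpenSSL's hashed-directory convention: each trusted CA in /etc/ssl/certs is reachable through a symlink named <subject-hash>.0, where the hash is exactly what the preceding `openssl x509 -hash -noout` calls printed (b5213941 for the minikube CA). Recreating one such link by hand, as a sketch:
	
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"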
	I0217 11:57:17.141827  100380 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0217 11:57:17.146771  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0217 11:57:17.158301  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0217 11:57:17.170200  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0217 11:57:17.177413  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0217 11:57:17.186556  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0217 11:57:17.193933  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
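	Each of the openssl runs above uses -checkend 86400, which exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit is how minikube decides a control-plane certificate cannot be reused across the restart. The check in isolation, against one of the same files:
	
	    if sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
	        echo "valid for at least 24h, safe to reuse"
	    else
	        echo "expires within 24h, needs regeneration"
	    fi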
	I0217 11:57:17.203839  100380 kubeadm.go:392] StartCluster: {Name:ha-783738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.168 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 11:57:17.204089  100380 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0217 11:57:17.225257  100380 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0217 11:57:17.236858  100380 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0217 11:57:17.236876  100380 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0217 11:57:17.236920  100380 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0217 11:57:17.246285  100380 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0217 11:57:17.246828  100380 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-783738" does not appear in /home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:57:17.246986  100380 kubeconfig.go:62] /home/jenkins/minikube-integration/20427-77349/kubeconfig needs updating (will repair): [kubeconfig missing "ha-783738" cluster setting kubeconfig missing "ha-783738" context setting]
	I0217 11:57:17.247367  100380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-77349/kubeconfig: {Name:mka23a5c17f10bb58374e83755a2ac6a44464e11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:17.247895  100380 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:57:17.248117  100380 kapi.go:59] client config for ha-783738: &rest.Config{Host:"https://192.168.39.249:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/client.crt", KeyFile:"/home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/client.key", CAFile:"/home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24df700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0217 11:57:17.248591  100380 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0217 11:57:17.248610  100380 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0217 11:57:17.248615  100380 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0217 11:57:17.248619  100380 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0217 11:57:17.248634  100380 cert_rotation.go:140] Starting client certificate rotation controller
	I0217 11:57:17.249054  100380 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0217 11:57:17.258029  100380 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.249
	I0217 11:57:17.258053  100380 kubeadm.go:597] duration metric: took 21.170416ms to restartPrimaryControlPlane
	I0217 11:57:17.258062  100380 kubeadm.go:394] duration metric: took 54.240079ms to StartCluster
	I0217 11:57:17.258077  100380 settings.go:142] acquiring lock: {Name:mkf730c657b1c2d5a481dbeb02dabe7dfa17f2d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:17.258150  100380 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:57:17.258639  100380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-77349/kubeconfig: {Name:mka23a5c17f10bb58374e83755a2ac6a44464e11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:17.258848  100380 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0217 11:57:17.258870  100380 start.go:241] waiting for startup goroutines ...
	I0217 11:57:17.258884  100380 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0217 11:57:17.259112  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:57:17.261397  100380 out.go:177] * Enabled addons: 
	I0217 11:57:17.262668  100380 addons.go:514] duration metric: took 3.785415ms for enable addons: enabled=[]
	I0217 11:57:17.262703  100380 start.go:246] waiting for cluster config update ...
	I0217 11:57:17.262713  100380 start.go:255] writing updated cluster config ...
	I0217 11:57:17.264127  100380 out.go:201] 
	I0217 11:57:17.265577  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:57:17.265703  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:57:17.267570  100380 out.go:177] * Starting "ha-783738-m02" control-plane node in "ha-783738" cluster
	I0217 11:57:17.268921  100380 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0217 11:57:17.268950  100380 cache.go:56] Caching tarball of preloaded images
	I0217 11:57:17.269061  100380 preload.go:172] Found /home/jenkins/minikube-integration/20427-77349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0217 11:57:17.269074  100380 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0217 11:57:17.269250  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:57:17.269484  100380 start.go:360] acquireMachinesLock for ha-783738-m02: {Name:mk05ba8323ae77ab7dcc14c378d65810d956fdc0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0217 11:57:17.269554  100380 start.go:364] duration metric: took 46.103µs to acquireMachinesLock for "ha-783738-m02"
	I0217 11:57:17.269576  100380 start.go:96] Skipping create...Using existing machine configuration
	I0217 11:57:17.269584  100380 fix.go:54] fixHost starting: m02
	I0217 11:57:17.269846  100380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:57:17.269891  100380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:57:17.284961  100380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45093
	I0217 11:57:17.285438  100380 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:57:17.285964  100380 main.go:141] libmachine: Using API Version  1
	I0217 11:57:17.285991  100380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:57:17.286358  100380 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:57:17.286562  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:17.286744  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetState
	I0217 11:57:17.288288  100380 fix.go:112] recreateIfNeeded on ha-783738-m02: state=Stopped err=<nil>
	I0217 11:57:17.288317  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	W0217 11:57:17.288473  100380 fix.go:138] unexpected machine state, will restart: <nil>
	I0217 11:57:17.290496  100380 out.go:177] * Restarting existing kvm2 VM for "ha-783738-m02" ...
	I0217 11:57:17.291737  100380 main.go:141] libmachine: (ha-783738-m02) Calling .Start
	I0217 11:57:17.291936  100380 main.go:141] libmachine: (ha-783738-m02) starting domain...
	I0217 11:57:17.291957  100380 main.go:141] libmachine: (ha-783738-m02) ensuring networks are active...
	I0217 11:57:17.292625  100380 main.go:141] libmachine: (ha-783738-m02) Ensuring network default is active
	I0217 11:57:17.292935  100380 main.go:141] libmachine: (ha-783738-m02) Ensuring network mk-ha-783738 is active
	I0217 11:57:17.293260  100380 main.go:141] libmachine: (ha-783738-m02) getting domain XML...
	I0217 11:57:17.293893  100380 main.go:141] libmachine: (ha-783738-m02) creating domain...
	I0217 11:57:18.506378  100380 main.go:141] libmachine: (ha-783738-m02) waiting for IP...
	I0217 11:57:18.507364  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:18.507881  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:18.507974  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:18.507878  100573 retry.go:31] will retry after 190.071186ms: waiting for domain to come up
	I0217 11:57:18.699203  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:18.699617  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:18.699682  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:18.699590  100573 retry.go:31] will retry after 254.022024ms: waiting for domain to come up
	I0217 11:57:18.955132  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:18.955578  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:18.955602  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:18.955533  100573 retry.go:31] will retry after 332.594264ms: waiting for domain to come up
	I0217 11:57:19.290041  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:19.290494  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:19.290519  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:19.290472  100573 retry.go:31] will retry after 550.484931ms: waiting for domain to come up
	I0217 11:57:19.842363  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:19.842844  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:19.842873  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:19.842822  100573 retry.go:31] will retry after 743.60757ms: waiting for domain to come up
	I0217 11:57:20.587667  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:20.588025  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:20.588058  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:20.587981  100573 retry.go:31] will retry after 701.750144ms: waiting for domain to come up
	I0217 11:57:21.290980  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:21.291500  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:21.291530  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:21.291445  100573 retry.go:31] will retry after 755.313925ms: waiting for domain to come up
	I0217 11:57:22.047876  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:22.048286  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:22.048318  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:22.048246  100573 retry.go:31] will retry after 1.338224716s: waiting for domain to come up
	I0217 11:57:23.388238  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:23.388759  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:23.388796  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:23.388727  100573 retry.go:31] will retry after 1.367661407s: waiting for domain to come up
	I0217 11:57:24.758376  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:24.758722  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:24.758764  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:24.758718  100573 retry.go:31] will retry after 2.08548116s: waiting for domain to come up
	I0217 11:57:26.846621  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:26.847150  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:26.847253  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:26.847166  100573 retry.go:31] will retry after 1.933968455s: waiting for domain to come up
	I0217 11:57:28.782369  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:28.782785  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:28.782815  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:28.782752  100573 retry.go:31] will retry after 3.162167749s: waiting for domain to come up
	I0217 11:57:31.947188  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:31.947578  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:31.947603  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:31.947545  100573 retry.go:31] will retry after 3.924986004s: waiting for domain to come up
	I0217 11:57:35.877102  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:35.877437  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has current primary IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:35.877460  100380 main.go:141] libmachine: (ha-783738-m02) found domain IP: 192.168.39.31
	I0217 11:57:35.877473  100380 main.go:141] libmachine: (ha-783738-m02) reserving static IP address...
	I0217 11:57:35.877915  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "ha-783738-m02", mac: "52:54:00:06:81:a2", ip: "192.168.39.31"} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:35.877942  100380 main.go:141] libmachine: (ha-783738-m02) DBG | skip adding static IP to network mk-ha-783738 - found existing host DHCP lease matching {name: "ha-783738-m02", mac: "52:54:00:06:81:a2", ip: "192.168.39.31"}
	I0217 11:57:35.877960  100380 main.go:141] libmachine: (ha-783738-m02) reserved static IP address 192.168.39.31 for domain ha-783738-m02
	I0217 11:57:35.877972  100380 main.go:141] libmachine: (ha-783738-m02) waiting for SSH...
	I0217 11:57:35.877983  100380 main.go:141] libmachine: (ha-783738-m02) DBG | Getting to WaitForSSH function...
	I0217 11:57:35.880382  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:35.880801  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:35.880830  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:35.880903  100380 main.go:141] libmachine: (ha-783738-m02) DBG | Using SSH client type: external
	I0217 11:57:35.880925  100380 main.go:141] libmachine: (ha-783738-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa (-rw-------)
	I0217 11:57:35.880955  100380 main.go:141] libmachine: (ha-783738-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0217 11:57:35.880970  100380 main.go:141] libmachine: (ha-783738-m02) DBG | About to run SSH command:
	I0217 11:57:35.880982  100380 main.go:141] libmachine: (ha-783738-m02) DBG | exit 0
	I0217 11:57:36.005182  100380 main.go:141] libmachine: (ha-783738-m02) DBG | SSH cmd err, output: <nil>: 
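	The block above is the driver's boot probe: it polls libvirt for a DHCP lease on the domain's MAC with growing backoff (190ms up to ~4s) until an IP appears, then confirms reachability by running `exit 0` over SSH with a 10s connect timeout. The same probe can be reproduced outside minikube; a sketch assuming the network name mk-ha-783738 and the machine key path shown above:
	
	    # Poll the cluster network's DHCP leases until the m02 MAC appears.
	    until virsh --connect qemu:///system net-dhcp-leases mk-ha-783738 | grep -q '52:54:00:06:81:a2'; do
	        sleep 1
	    done
	    # Probe SSH the way the driver does: a no-op command, judged by exit status.
	    ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -i /home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa \
	        docker@192.168.39.31 'exit 0'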
	I0217 11:57:36.005527  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetConfigRaw
	I0217 11:57:36.006216  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetIP
	I0217 11:57:36.008704  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.009084  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.009118  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.009443  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:57:36.009639  100380 machine.go:93] provisionDockerMachine start ...
	I0217 11:57:36.009657  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:36.009816  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.011849  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.012187  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.012218  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.012360  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.012557  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.012710  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.012836  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.012947  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.013115  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.013130  100380 main.go:141] libmachine: About to run SSH command:
	hostname
	I0217 11:57:36.113056  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0217 11:57:36.113093  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetMachineName
	I0217 11:57:36.113376  100380 buildroot.go:166] provisioning hostname "ha-783738-m02"
	I0217 11:57:36.113403  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetMachineName
	I0217 11:57:36.113566  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.116233  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.116606  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.116634  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.116762  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.116907  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.117025  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.117242  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.117464  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.117681  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.117699  100380 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-783738-m02 && echo "ha-783738-m02" | sudo tee /etc/hostname
	I0217 11:57:36.230628  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-783738-m02
	
	I0217 11:57:36.230670  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.233644  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.233991  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.234015  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.234196  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.234491  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.234686  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.234856  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.235006  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.235194  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.235211  100380 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-783738-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-783738-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-783738-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0217 11:57:36.341290  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
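	Provisioning has now set the hostname in three places: the live kernel value (sudo hostname), /etc/hostname for persistence (the tee above), and /etc/hosts, where the script just echoed edits an existing 127.0.1.1 line in place rather than appending a duplicate. Verifying the result on the node, as a sketch:
	
	    hostname                       # expected: ha-783738-m02
	    grep '^127.0.1.1' /etc/hosts   # expected: 127.0.1.1 ha-783738-m02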
	I0217 11:57:36.341332  100380 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20427-77349/.minikube CaCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20427-77349/.minikube}
	I0217 11:57:36.341348  100380 buildroot.go:174] setting up certificates
	I0217 11:57:36.341360  100380 provision.go:84] configureAuth start
	I0217 11:57:36.341373  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetMachineName
	I0217 11:57:36.341646  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetIP
	I0217 11:57:36.344453  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.344944  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.344981  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.345158  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.347416  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.347719  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.347744  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.347910  100380 provision.go:143] copyHostCerts
	I0217 11:57:36.347943  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem
	I0217 11:57:36.347989  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem, removing ...
	I0217 11:57:36.347999  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem
	I0217 11:57:36.348065  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem (1082 bytes)
	I0217 11:57:36.348156  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem
	I0217 11:57:36.348190  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem, removing ...
	I0217 11:57:36.348200  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem
	I0217 11:57:36.348229  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem (1123 bytes)
	I0217 11:57:36.348286  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem
	I0217 11:57:36.348310  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem, removing ...
	I0217 11:57:36.348320  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem
	I0217 11:57:36.348347  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem (1675 bytes)
	I0217 11:57:36.348413  100380 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem org=jenkins.ha-783738-m02 san=[127.0.0.1 192.168.39.31 ha-783738-m02 localhost minikube]
	I0217 11:57:36.476199  100380 provision.go:177] copyRemoteCerts
	I0217 11:57:36.476256  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0217 11:57:36.476280  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.479126  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.479497  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.479529  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.479677  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.479868  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.480073  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.480258  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	I0217 11:57:36.558954  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0217 11:57:36.559023  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0217 11:57:36.581755  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0217 11:57:36.581816  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0217 11:57:36.604328  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0217 11:57:36.604411  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0217 11:57:36.626183  100380 provision.go:87] duration metric: took 284.807453ms to configureAuth
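	configureAuth has now placed ca.pem, server.pem and server-key.pem under /etc/docker on the node; combined with the --tlsverify flags in the docker unit written below, they let the host drive the remote daemon on port 2376 over mutual TLS. A client-side sketch, assuming the daemon is running and using the client cert pair from the host's .minikube/certs directory:
	
	    docker --tlsverify \
	        --tlscacert /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem \
	        --tlscert /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem \
	        --tlskey /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem \
	        -H tcp://192.168.39.31:2376 version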
	I0217 11:57:36.626219  100380 buildroot.go:189] setting minikube options for container-runtime
	I0217 11:57:36.626492  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:57:36.626522  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:36.626768  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.629194  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.629569  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.629594  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.629740  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.629904  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.630077  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.630201  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.630389  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.630601  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.630614  100380 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0217 11:57:36.730964  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0217 11:57:36.730995  100380 buildroot.go:70] root file system type: tmpfs
	I0217 11:57:36.731148  100380 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0217 11:57:36.731184  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.733718  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.734119  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.734150  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.734340  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.734539  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.734714  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.734847  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.734986  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.735198  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.735304  100380 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.249"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0217 11:57:36.846599  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.249
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
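
The unit written above relies on the systemd override rule spelled out in its own comments: for non-oneshot services, an ExecStart= with an empty value must first clear the command inherited from the base unit before the real dockerd command line is given, or systemd refuses to start the service. A trimmed Go sketch of rendering such a unit from a template follows; it is an illustration only, not minikube's actual template code, and keeps just a subset of the directives shown above.

package main

import (
	"os"
	"text/template"
)

// dockerUnit is a hypothetical, trimmed-down version of the unit in the
// log; the empty ExecStart= clears the inherited command before the
// real one is set.
const dockerUnit = `[Unit]
Description=Docker Application Container Engine
After=network.target minikube-automount.service docker.socket
Requires=minikube-automount.service docker.socket

[Service]
Type=notify
Restart=on-failure
{{if .NoProxy}}Environment="NO_PROXY={{.NoProxy}}"
{{end}}# Clear the inherited ExecStart before setting the real one.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock

[Install]
WantedBy=multi-user.target
`

func main() {
	t := template.Must(template.New("docker").Parse(dockerUnit))
	// Render with the NO_PROXY value seen in the log.
	if err := t.Execute(os.Stdout, struct{ NoProxy string }{"192.168.39.249"}); err != nil {
		panic(err)
	}
}
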
	I0217 11:57:36.846633  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.849370  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.849714  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.849733  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.849923  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.850116  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.850290  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.850443  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.850608  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.850788  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.850805  100380 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0217 11:57:38.700010  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0217 11:57:38.700036  100380 machine.go:96] duration metric: took 2.690384734s to provisionDockerMachine
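
The compound command above is an idempotency guard: diff succeeds and nothing happens when the rendered unit matches what is already on disk; otherwise the new file is moved into place and Docker is re-enabled and restarted. Here diff fails with "No such file or directory", so this is a first-time install (hence the "Created symlink" line). A local-filesystem sketch of the same compare-then-install idea, with illustrative paths:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged mimics the shell idiom from the log: keep the old
// file when the contents match, otherwise move the candidate into
// place and report that a restart is needed.
func installIfChanged(current, candidate string) (restart bool, err error) {
	oldData, readErr := os.ReadFile(current)
	newData, err := os.ReadFile(candidate)
	if err != nil {
		return false, err
	}
	if readErr == nil && bytes.Equal(oldData, newData) {
		return false, nil // unchanged: nothing to do
	}
	// Missing or different: install the new unit.
	return true, os.Rename(candidate, current)
}

func main() {
	restart, err := installIfChanged("docker.service", "docker.service.new")
	fmt.Println("restart needed:", restart, "err:", err)
}
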
	I0217 11:57:38.700051  100380 start.go:293] postStartSetup for "ha-783738-m02" (driver="kvm2")
	I0217 11:57:38.700060  100380 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0217 11:57:38.700075  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:38.700389  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0217 11:57:38.700425  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:38.703068  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.703435  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:38.703465  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.703605  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:38.703807  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:38.703952  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:38.704102  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	I0217 11:57:38.783381  100380 ssh_runner.go:195] Run: cat /etc/os-release
	I0217 11:57:38.787188  100380 info.go:137] Remote host: Buildroot 2023.02.9
	I0217 11:57:38.787215  100380 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-77349/.minikube/addons for local assets ...
	I0217 11:57:38.787270  100380 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-77349/.minikube/files for local assets ...
	I0217 11:57:38.787341  100380 filesync.go:149] local asset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> 845022.pem in /etc/ssl/certs
	I0217 11:57:38.787352  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> /etc/ssl/certs/845022.pem
	I0217 11:57:38.787430  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0217 11:57:38.796091  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem --> /etc/ssl/certs/845022.pem (1708 bytes)
	I0217 11:57:38.817716  100380 start.go:296] duration metric: took 117.649565ms for postStartSetup
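
The filesync lines above show how local assets are mirrored into the guest: anything under the profile's .minikube/files tree keeps its relative path, so files/etc/ssl/certs/845022.pem lands at /etc/ssl/certs/845022.pem. A small sketch of that mapping (the function name is made up; the paths are the ones from the log):

package main

import (
	"fmt"
	"path/filepath"
)

// remotePath maps a file under the local sync root onto the guest
// filesystem, preserving its relative path, as the NewFileAsset line
// in the log does.
func remotePath(syncRoot, local string) (string, error) {
	rel, err := filepath.Rel(syncRoot, local)
	if err != nil {
		return "", err
	}
	return "/" + filepath.ToSlash(rel), nil
}

func main() {
	p, _ := remotePath(
		"/home/jenkins/minikube-integration/20427-77349/.minikube/files",
		"/home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem",
	)
	fmt.Println(p) // /etc/ssl/certs/845022.pem
}
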
	I0217 11:57:38.817759  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:38.818052  100380 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0217 11:57:38.818087  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:38.820354  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.820669  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:38.820694  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.820809  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:38.820978  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:38.821138  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:38.821273  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	I0217 11:57:38.900214  100380 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0217 11:57:38.900294  100380 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0217 11:57:38.959273  100380 fix.go:56] duration metric: took 21.689681729s for fixHost
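
Restoring /var/lib/minikube/backup/etc with rsync --archive --update preserves ownership and permissions while skipping any file that is already newer in the live /etc, so the restore does not clobber files that provisioning has just rewritten. The same invocation sketched via os/exec:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --archive preserves permissions/ownership; --update skips files
	// that are already newer at the destination, matching the log.
	cmd := exec.Command("sudo", "rsync", "--archive", "--update",
		"/var/lib/minikube/backup/etc", "/")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out), err)
}
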
	I0217 11:57:38.959327  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:38.961853  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.962326  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:38.962364  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.962591  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:38.962788  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:38.962952  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:38.963062  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:38.963238  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:38.963408  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:38.963419  100380 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0217 11:57:39.071315  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739793459.049434891
	
	I0217 11:57:39.071339  100380 fix.go:216] guest clock: 1739793459.049434891
	I0217 11:57:39.071349  100380 fix.go:229] Guest: 2025-02-17 11:57:39.049434891 +0000 UTC Remote: 2025-02-17 11:57:38.959302801 +0000 UTC m=+48.782039917 (delta=90.13209ms)
	I0217 11:57:39.071366  100380 fix.go:200] guest clock delta is within tolerance: 90.13209ms
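
The clock check above parses the guest's `date +%s.%N` output (1739793459.049434891) and compares it against the host clock captured when the command returned, accepting the ~90ms delta. A sketch of that parse-and-compare step; the 2s tolerance below is an assumption, since the log does not print minikube's actual threshold:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as seen in the log.
	guestRaw := "1739793459.049434891"
	secs, _ := strconv.ParseFloat(guestRaw, 64)
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	host := time.Now()
	delta := host.Sub(guest)

	// Hypothetical tolerance; the real threshold is not shown in the log.
	const tolerance = 2 * time.Second
	fmt.Printf("delta=%v within=%v\n", delta,
		math.Abs(float64(delta)) <= float64(tolerance))
}
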
	I0217 11:57:39.071371  100380 start.go:83] releasing machines lock for "ha-783738-m02", held for 21.801804436s
	I0217 11:57:39.071393  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:39.071600  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetIP
	I0217 11:57:39.074321  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.074707  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:39.074736  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.076949  100380 out.go:177] * Found network options:
	I0217 11:57:39.078428  100380 out.go:177]   - NO_PROXY=192.168.39.249
	W0217 11:57:39.079686  100380 proxy.go:119] fail to check proxy env: Error ip not in block
	I0217 11:57:39.079714  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:39.080218  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:39.080403  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:39.080510  100380 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0217 11:57:39.080551  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	W0217 11:57:39.080631  100380 proxy.go:119] fail to check proxy env: Error ip not in block
	I0217 11:57:39.080722  100380 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0217 11:57:39.080748  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:39.083432  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.083453  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.083887  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:39.083914  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:39.083933  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.083949  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.084264  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:39.084411  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:39.084597  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:39.084609  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:39.084763  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:39.084784  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	I0217 11:57:39.084915  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:39.085034  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	W0217 11:57:39.178061  100380 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0217 11:57:39.178137  100380 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0217 11:57:39.195964  100380 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0217 11:57:39.196001  100380 start.go:495] detecting cgroup driver to use...
	I0217 11:57:39.196148  100380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0217 11:57:39.216666  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0217 11:57:39.226815  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0217 11:57:39.236611  100380 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0217 11:57:39.236669  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0217 11:57:39.246500  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0217 11:57:39.256691  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0217 11:57:39.266509  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0217 11:57:39.276231  100380 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0217 11:57:39.286298  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0217 11:57:39.296149  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0217 11:57:39.305984  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
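
The sed run above (11:57:39.216 through 11:57:39.305) rewrites /etc/containerd/config.toml in place: pin the pause sandbox image, disable restrict_oom_score_adj, set SystemdCgroup = false for the "cgroupfs" driver announced at containerd.go:146, migrate v1 runtime handlers to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. One of those edits expressed as a Go regexp, with a toy input:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	toml := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`

	// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(toml, "${1}SystemdCgroup = false"))
}
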
	I0217 11:57:39.315650  100380 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0217 11:57:39.324721  100380 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0217 11:57:39.324777  100380 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0217 11:57:39.334429  100380 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
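
Lines 11:57:39.315 through 11:57:39.334 are a check-then-fallback: sysctl cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables because br_netfilter is not loaded yet (the log itself notes this "might be okay"), so the module is loaded and IPv4 forwarding is enabled. The same fallback logic sketched in Go (it needs root to actually take effect):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err != nil {
		// Key is absent until br_netfilter loads; mirror the log's fallback.
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Println("modprobe failed:", err, string(out))
		}
	}
	// Equivalent of: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Println("ip_forward:", err)
	}
}
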
	I0217 11:57:39.343052  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:39.458041  100380 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0217 11:57:39.483361  100380 start.go:495] detecting cgroup driver to use...
	I0217 11:57:39.483453  100380 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0217 11:57:39.501404  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0217 11:57:39.522545  100380 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0217 11:57:39.545214  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0217 11:57:39.557462  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0217 11:57:39.569445  100380 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0217 11:57:39.593668  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0217 11:57:39.606767  100380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0217 11:57:39.623713  100380 ssh_runner.go:195] Run: which cri-dockerd
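
Note that /etc/crictl.yaml has now been written twice: first pointing at the containerd socket while the cgroup driver was being detected, and just above at unix:///var/run/cri-dockerd.sock once Docker is the chosen runtime, so crictl talks to the CRI shim in front of Docker. Writing that one-key config, sketched in Go:

package main

import "os"

func main() {
	// Same single-key YAML the log pipes through `sudo tee`.
	conf := "runtime-endpoint: unix:///var/run/cri-dockerd.sock\n"
	if err := os.WriteFile("/etc/crictl.yaml", []byte(conf), 0o644); err != nil {
		panic(err)
	}
}
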
	I0217 11:57:39.627306  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0217 11:57:39.635920  100380 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0217 11:57:39.651184  100380 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0217 11:57:39.767938  100380 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0217 11:57:39.884761  100380 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0217 11:57:39.884806  100380 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
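
docker.go:574 says the daemon is being configured for the "cgroupfs" cgroup driver, and a 130-byte /etc/docker/daemon.json is copied over, but its contents are not printed in this log. A hypothetical daemon.json of that shape, generated in Go; only the cgroup-driver setting is grounded in the log, and anything beyond it would be an assumption:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative only: the log shows a 130-byte daemon.json being
	// written but not its contents; cgroupfs is the one setting the
	// log states explicitly.
	conf := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(b))
}
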
	I0217 11:57:39.900934  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:40.013206  100380 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0217 11:58:41.088581  100380 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.075335279s)
	I0217 11:58:41.088680  100380 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0217 11:58:41.109373  100380 out.go:201] 
	W0217 11:58:41.110918  100380 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Feb 17 11:57:37 ha-783738-m02 systemd[1]: Starting Docker Application Container Engine...
	Feb 17 11:57:37 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:37.207555071Z" level=info msg="Starting up"
	Feb 17 11:57:37 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:37.208523706Z" level=info msg="containerd not running, starting managed containerd"
	Feb 17 11:57:37 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:37.209284365Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=499
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.234357473Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.253922324Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254071326Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254155313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254195097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254502645Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254572700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254826671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254880442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254926515Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254965881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.255209553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.255502921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257578132Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257723954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257912930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257960933Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.258214223Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.258292090Z" level=info msg="metadata content store policy set" policy=shared
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262281766Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262389757Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262437193Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262478052Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262523730Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262614966Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262915194Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263049035Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263094390Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263137669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263176270Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263217488Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263254710Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263292496Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263339613Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263377065Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263418085Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263453223Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263511094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263549833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263589341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263631649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263726157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263766086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263809930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263847665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263885358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263932212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263972615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264020660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264063975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264103157Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264158305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264194401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264230305Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264327104Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264417123Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264457690Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264499822Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264534568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264575047Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264616722Z" level=info msg="NRI interface is disabled by configuration."
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264938960Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.265032087Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.265091203Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.265132167Z" level=info msg="containerd successfully booted in 0.032037s"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.237803305Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.295143778Z" level=info msg="Loading containers: start."
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.484051173Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.565431513Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.632528889Z" level=info msg="Loading containers: done."
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.653906274Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.653941707Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.653962858Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.654196375Z" level=info msg="Daemon has completed initialization"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.676178691Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.676315120Z" level=info msg="API listen on [::]:2376"
	Feb 17 11:57:38 ha-783738-m02 systemd[1]: Started Docker Application Container Engine.
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.005718953Z" level=info msg="Processing signal 'terminated'"
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.007186879Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.007378782Z" level=info msg="Daemon shutdown complete"
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.007446197Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 17 11:57:40 ha-783738-m02 systemd[1]: Stopping Docker Application Container Engine...
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.008214930Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 17 11:57:41 ha-783738-m02 systemd[1]: docker.service: Deactivated successfully.
	Feb 17 11:57:41 ha-783738-m02 systemd[1]: Stopped Docker Application Container Engine.
	Feb 17 11:57:41 ha-783738-m02 systemd[1]: Starting Docker Application Container Engine...
	Feb 17 11:57:41 ha-783738-m02 dockerd[1120]: time="2025-02-17T11:57:41.051838490Z" level=info msg="Starting up"
	Feb 17 11:58:41 ha-783738-m02 dockerd[1120]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 17 11:58:41 ha-783738-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 17 11:58:41 ha-783738-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 17 11:58:41 ha-783738-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
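
The journal above pins down the failure: the first dockerd (pid 493) launched its own managed containerd on /var/run/docker/containerd/containerd.sock and started cleanly, but the restarted dockerd (pid 1120) instead waits on the system socket /run/containerd/containerd.sock and gives up after 60 seconds ("context deadline exceeded"), so docker.service fails and the run exits with RUNTIME_ENABLE. A minimal reachability probe for that symptom, assuming the socket path from the journal:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The socket the restarted dockerd (pid 1120) fails to dial.
	conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 5*time.Second)
	if err != nil {
		fmt.Println("containerd socket unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("containerd socket is accepting connections")
}
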
	W0217 11:58:41.110964  100380 out.go:270] * 
	W0217 11:58:41.111815  100380 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0217 11:58:41.113412  100380 out.go:201] 
	
	
	==> Docker <==
	Feb 17 11:57:23 ha-783738 dockerd[1134]: time="2025-02-17T11:57:23.574956613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 17 11:57:44 ha-783738 dockerd[1126]: time="2025-02-17T11:57:44.652472286Z" level=info msg="ignoring event" container=0eab009d1fe54d541fe5b166302e5af1a153e8aa37ad6a133704c1f40918f7c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 17 11:57:44 ha-783738 dockerd[1134]: time="2025-02-17T11:57:44.653058320Z" level=info msg="shim disconnected" id=0eab009d1fe54d541fe5b166302e5af1a153e8aa37ad6a133704c1f40918f7c9 namespace=moby
	Feb 17 11:57:44 ha-783738 dockerd[1134]: time="2025-02-17T11:57:44.653483834Z" level=warning msg="cleaning up after shim disconnected" id=0eab009d1fe54d541fe5b166302e5af1a153e8aa37ad6a133704c1f40918f7c9 namespace=moby
	Feb 17 11:57:44 ha-783738 dockerd[1134]: time="2025-02-17T11:57:44.653545740Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 17 11:57:45 ha-783738 dockerd[1126]: time="2025-02-17T11:57:45.663576348Z" level=info msg="ignoring event" container=1683ded4f12ef91eea7067f33248f5185b17f0532a1c1480efe277bcd8accfe6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 17 11:57:45 ha-783738 dockerd[1134]: time="2025-02-17T11:57:45.664110377Z" level=info msg="shim disconnected" id=1683ded4f12ef91eea7067f33248f5185b17f0532a1c1480efe277bcd8accfe6 namespace=moby
	Feb 17 11:57:45 ha-783738 dockerd[1134]: time="2025-02-17T11:57:45.664165013Z" level=warning msg="cleaning up after shim disconnected" id=1683ded4f12ef91eea7067f33248f5185b17f0532a1c1480efe277bcd8accfe6 namespace=moby
	Feb 17 11:57:45 ha-783738 dockerd[1134]: time="2025-02-17T11:57:45.664175956Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.854960498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.855123802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.855151191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.855373177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.858152322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.858222102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.858232103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.858372930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 17 11:58:25 ha-783738 dockerd[1126]: time="2025-02-17T11:58:25.325613613Z" level=info msg="ignoring event" container=0d8dd6abc6b0262f0e2de062685df6bbc87187dd14023d0fd12b894f48bd2001 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 17 11:58:25 ha-783738 dockerd[1134]: time="2025-02-17T11:58:25.326644755Z" level=info msg="shim disconnected" id=0d8dd6abc6b0262f0e2de062685df6bbc87187dd14023d0fd12b894f48bd2001 namespace=moby
	Feb 17 11:58:25 ha-783738 dockerd[1134]: time="2025-02-17T11:58:25.326737271Z" level=warning msg="cleaning up after shim disconnected" id=0d8dd6abc6b0262f0e2de062685df6bbc87187dd14023d0fd12b894f48bd2001 namespace=moby
	Feb 17 11:58:25 ha-783738 dockerd[1134]: time="2025-02-17T11:58:25.326756884Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 17 11:58:26 ha-783738 dockerd[1126]: time="2025-02-17T11:58:26.334899301Z" level=info msg="ignoring event" container=2e90f752fdc0601abb5401e228fa8355b97462cfd9f4dafb766f56eaf8e7b13a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 17 11:58:26 ha-783738 dockerd[1134]: time="2025-02-17T11:58:26.335703125Z" level=info msg="shim disconnected" id=2e90f752fdc0601abb5401e228fa8355b97462cfd9f4dafb766f56eaf8e7b13a namespace=moby
	Feb 17 11:58:26 ha-783738 dockerd[1134]: time="2025-02-17T11:58:26.335778773Z" level=warning msg="cleaning up after shim disconnected" id=2e90f752fdc0601abb5401e228fa8355b97462cfd9f4dafb766f56eaf8e7b13a namespace=moby
	Feb 17 11:58:26 ha-783738 dockerd[1134]: time="2025-02-17T11:58:26.335795547Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2e90f752fdc06       019ee182b58e2       37 seconds ago       Exited              kube-controller-manager   4                   eeb1b6c34de35       kube-controller-manager-ha-783738
	0d8dd6abc6b02       95c0bda56fc4d       37 seconds ago       Exited              kube-apiserver            4                   a531c479908eb       kube-apiserver-ha-783738
	d524d25a3256e       2b0d6572d062c       About a minute ago   Running             kube-scheduler            2                   5633bc5aacc12       kube-scheduler-ha-783738
	2b8921c7d9f71       22f88dde2caa4       About a minute ago   Running             kube-vip                  1                   5f0329677cb70       kube-vip-ha-783738
	aeb757a6db075       a9e7e6b294baf       About a minute ago   Running             etcd                      2                   8c5c6a3fd0ba0       etcd-ha-783738
	8c236b02a8316       6e38f40d628db       4 minutes ago        Exited              storage-provisioner       3                   3b5478be91580       storage-provisioner
	f460be4118731       8c811b4aec35f       4 minutes ago        Exited              busybox                   1                   cd41205ee4990       busybox-58667487b6-mp8w2
	5caaef1da4142       e29f9c7391fd9       4 minutes ago        Exited              kube-proxy                1                   3bada7fe972b9       kube-proxy-pgwb4
	95f567924c5ee       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   1                   33c8d49183b1a       coredns-668d6bf9bc-bhrvt
	b4ccb469b39af       df3849d954c98       4 minutes ago        Exited              kindnet-cni               1                   bba5ce66a15dd       kindnet-t72ln
	b674f5b7afb38       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   1                   bfd8d387b7e96       coredns-668d6bf9bc-k5k72
	1395373a3c212       2b0d6572d062c       5 minutes ago        Exited              kube-scheduler            1                   fe3b7022472a7       kube-scheduler-ha-783738
	0644596c7e815       a9e7e6b294baf       5 minutes ago        Exited              etcd                      1                   a79f0d4414c0a       etcd-ha-783738
	905fe651f5a2d       22f88dde2caa4       5 minutes ago        Exited              kube-vip                  0                   6e727a24edb43       kube-vip-ha-783738
	
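
The table above captures the primary node after the restart: etcd, kube-scheduler, and kube-vip are Running, while kube-apiserver and kube-controller-manager have both Exited on their fourth attempt, which explains the apiserver-connectivity errors later in this report. Reproducing the "exited" view with standard docker ps filters, from Go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// docker ps -a --filter status=exited, restricted to the fields
	// that matter here; this mirrors the report's container table.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "status=exited",
		"--format", "{{.ID}}\t{{.Names}}\t{{.Status}}").CombinedOutput()
	fmt.Println(string(out), err)
}
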
	
	==> coredns [95f567924c5e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:54083 - 5538 "HINFO IN 6952713337195609451.67698316276633629. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.046526479s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[586752551]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.037) (total time: 30004ms):
	Trace[586752551]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (11:54:29.042)
	Trace[586752551]: [30.004932204s] [30.004932204s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[31748474]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.037) (total time: 30005ms):
	Trace[31748474]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (11:54:29.043)
	Trace[31748474]: [30.005260877s] [30.005260877s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1254162758]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.043) (total time: 30000ms):
	Trace[1254162758]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:54:29.044)
	Trace[1254162758]: [30.000938039s] [30.000938039s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b674f5b7afb3] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47652 - 30454 "HINFO IN 3233588620932119307.6917908993167898246. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026177844s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1310151553]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.042) (total time: 30001ms):
	Trace[1310151553]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:54:29.043)
	Trace[1310151553]: [30.001216976s] [30.001216976s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1951418715]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.039) (total time: 30005ms):
	Trace[1951418715]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (11:54:29.044)
	Trace[1951418715]: [30.005382964s] [30.005382964s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[606941673]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.038) (total time: 30006ms):
	Trace[606941673]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30006ms (11:54:29.044)
	Trace[606941673]: [30.006431575s] [30.006431575s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
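
Both CoreDNS instances fail identically: every list/watch against the in-cluster apiserver VIP https://10.96.0.1:443 times out after 30s, which points at the kube-apiserver being down rather than at DNS itself. A reachability probe for that VIP; it skips certificate verification because only connectivity is being tested:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Reachability check only; do not verify the serving cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.96.0.1:443/healthz")
	if err != nil {
		fmt.Println("apiserver VIP unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
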
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0217 11:58:41.991338    2717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0217 11:58:41.993795    2717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0217 11:58:41.995188    2717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0217 11:58:41.996646    2717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0217 11:58:41.998266    2717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb17 11:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052638] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037697] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.851026] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.992141] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Feb17 11:57] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.664405] systemd-fstab-generator[476]: Ignoring "noauto" option for root device
	[  +0.058988] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058916] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
	[  +2.348725] systemd-fstab-generator[1055]: Ignoring "noauto" option for root device
	[  +0.313948] systemd-fstab-generator[1092]: Ignoring "noauto" option for root device
	[  +0.110900] systemd-fstab-generator[1104]: Ignoring "noauto" option for root device
	[  +0.140552] systemd-fstab-generator[1118]: Ignoring "noauto" option for root device
	[  +2.263360] kauditd_printk_skb: 199 callbacks suppressed
	[  +0.301992] systemd-fstab-generator[1377]: Ignoring "noauto" option for root device
	[  +0.125509] systemd-fstab-generator[1390]: Ignoring "noauto" option for root device
	[  +0.118202] systemd-fstab-generator[1402]: Ignoring "noauto" option for root device
	[  +0.144218] systemd-fstab-generator[1417]: Ignoring "noauto" option for root device
	[  +0.508597] systemd-fstab-generator[1584]: Ignoring "noauto" option for root device
	[  +6.843964] kauditd_printk_skb: 180 callbacks suppressed
	[  +8.294455] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [0644596c7e81] <==
	{"level":"warn","ts":"2025-02-17T11:56:37.953386Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"416.799075ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-02-17T11:56:37.953402Z","caller":"traceutil/trace.go:171","msg":"trace[234534568] range","detail":"{range_begin:/registry/deployments/; range_end:/registry/deployments0; }","duration":"416.832899ms","start":"2025-02-17T11:56:37.536564Z","end":"2025-02-17T11:56:37.953396Z","steps":["trace[234534568] 'agreement among raft nodes before linearized reading'  (duration: 416.815476ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-17T11:56:37.953416Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-17T11:56:37.536510Z","time spent":"416.902435ms","remote":"127.0.0.1:58532","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":0,"response size":0,"request content":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true "}
	2025/02/17 11:56:37 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-02-17T11:56:37.953469Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.057072714s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-02-17T11:56:37.953479Z","caller":"traceutil/trace.go:171","msg":"trace[2020420396] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"1.057490424s","start":"2025-02-17T11:56:36.895986Z","end":"2025-02-17T11:56:37.953476Z","steps":["trace[2020420396] 'agreement among raft nodes before linearized reading'  (duration: 1.057479846s)"],"step_count":1}
	{"level":"warn","ts":"2025-02-17T11:56:37.953491Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-17T11:56:36.895975Z","time spent":"1.057513489s","remote":"127.0.0.1:58120","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	2025/02/17 11:56:37 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-02-17T11:56:37.953557Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.889027766s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-02-17T11:56:37.953567Z","caller":"traceutil/trace.go:171","msg":"trace[159538693] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; }","duration":"1.889056203s","start":"2025-02-17T11:56:36.064508Z","end":"2025-02-17T11:56:37.953564Z","steps":["trace[159538693] 'agreement among raft nodes before linearized reading'  (duration: 1.88904446s)"],"step_count":1}
	{"level":"warn","ts":"2025-02-17T11:56:37.953580Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-17T11:56:36.064496Z","time spent":"1.889079683s","remote":"127.0.0.1:58254","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":0,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true "}
	2025/02/17 11:56:37 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-02-17T11:56:38.012328Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.249:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-17T11:56:38.012367Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.249:2379: use of closed network connection"}
	{"level":"info","ts":"2025-02-17T11:56:38.012413Z","caller":"etcdserver/server.go:1534","msg":"skipped leadership transfer; local server is not leader","local-member-id":"318ee90c3446d547","current-leader-member-id":"0"}
	{"level":"info","ts":"2025-02-17T11:56:38.012793Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.012892Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.012915Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.012991Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.013022Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.013134Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.013145Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.016636Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.39.249:2380"}
	{"level":"info","ts":"2025-02-17T11:56:38.016720Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.39.249:2380"}
	{"level":"info","ts":"2025-02-17T11:56:38.016728Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"ha-783738","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.249:2380"],"advertise-client-urls":["https://192.168.39.249:2379"]}
	
	
	==> etcd [aeb757a6db07] <==
	{"level":"info","ts":"2025-02-17T11:58:37.637100Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgPreVoteResp from 318ee90c3446d547 at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:37.637132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 3, index: 3030] sent MsgPreVote request to 645ac05e9f2d470a at term 3"}
	{"level":"warn","ts":"2025-02-17T11:58:37.832695Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069265,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-02-17T11:58:38.333313Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069265,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-02-17T11:58:38.833992Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069265,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-02-17T11:58:39.105914Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"645ac05e9f2d470a","rtt":"0s","error":"dial tcp 192.168.39.31:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-02-17T11:58:39.106133Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"645ac05e9f2d470a","rtt":"0s","error":"dial tcp 192.168.39.31:2380: connect: connection refused"}
	{"level":"info","ts":"2025-02-17T11:58:39.236323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 is starting a new election at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:39.236529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:39.236639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgPreVoteResp from 318ee90c3446d547 at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:39.236682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 3, index: 3030] sent MsgPreVote request to 645ac05e9f2d470a at term 3"}
	{"level":"warn","ts":"2025-02-17T11:58:39.334913Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069265,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-02-17T11:58:39.836002Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069265,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-02-17T11:58:40.336905Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069265,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-02-17T11:58:40.836559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 is starting a new election at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:40.836692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:40.836729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgPreVoteResp from 318ee90c3446d547 at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:40.836762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 3, index: 3030] sent MsgPreVote request to 645ac05e9f2d470a at term 3"}
	{"level":"warn","ts":"2025-02-17T11:58:40.837045Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069265,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-02-17T11:58:41.084143Z","caller":"etcdserver/server.go:2161","msg":"failed to publish local member to cluster through raft","local-member-id":"318ee90c3446d547","local-member-attributes":"{Name:ha-783738 ClientURLs:[https://192.168.39.249:2379]}","request-path":"/0/members/318ee90c3446d547/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2025-02-17T11:58:41.337434Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069265,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-02-17T11:58:41.827365Z","caller":"etcdserver/v3_server.go:932","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2025-02-17T11:58:41.827445Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.000504247s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2025-02-17T11:58:41.827469Z","caller":"traceutil/trace.go:171","msg":"trace[1958910963] range","detail":"{range_begin:; range_end:; }","duration":"7.000551306s","start":"2025-02-17T11:58:34.826907Z","end":"2025-02-17T11:58:41.827459Z","steps":["trace[1958910963] 'agreement among raft nodes before linearized reading'  (duration: 7.000502454s)"],"step_count":1}
	{"level":"error","ts":"2025-02-17T11:58:41.827501Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2171\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2688\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3142\nnet/http.(*conn).serve\n\tnet/http/server.go:2044"}
	
	
	==> kernel <==
	 11:58:42 up 1 min,  0 users,  load average: 0.47, 0.29, 0.11
	Linux ha-783738 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b4ccb469b39a] <==
	I0217 11:56:00.000922       1 main.go:324] Node ha-783738-m02 has CIDR [10.244.1.0/24] 
	I0217 11:56:00.001386       1 main.go:297] Handling node with IPs: map[192.168.39.216:{}]
	I0217 11:56:00.001417       1 main.go:324] Node ha-783738-m03 has CIDR [10.244.2.0/24] 
	I0217 11:56:00.002870       1 main.go:297] Handling node with IPs: map[192.168.39.168:{}]
	I0217 11:56:00.003089       1 main.go:324] Node ha-783738-m04 has CIDR [10.244.3.0/24] 
	I0217 11:56:10.003758       1 main.go:297] Handling node with IPs: map[192.168.39.31:{}]
	I0217 11:56:10.004120       1 main.go:324] Node ha-783738-m02 has CIDR [10.244.1.0/24] 
	I0217 11:56:10.004466       1 main.go:297] Handling node with IPs: map[192.168.39.216:{}]
	I0217 11:56:10.004579       1 main.go:324] Node ha-783738-m03 has CIDR [10.244.2.0/24] 
	I0217 11:56:10.004848       1 main.go:297] Handling node with IPs: map[192.168.39.168:{}]
	I0217 11:56:10.004993       1 main.go:324] Node ha-783738-m04 has CIDR [10.244.3.0/24] 
	I0217 11:56:10.005322       1 main.go:297] Handling node with IPs: map[192.168.39.249:{}]
	I0217 11:56:10.005440       1 main.go:301] handling current node
	I0217 11:56:20.008868       1 main.go:297] Handling node with IPs: map[192.168.39.249:{}]
	I0217 11:56:20.008992       1 main.go:301] handling current node
	I0217 11:56:20.009032       1 main.go:297] Handling node with IPs: map[192.168.39.31:{}]
	I0217 11:56:20.009107       1 main.go:324] Node ha-783738-m02 has CIDR [10.244.1.0/24] 
	I0217 11:56:20.009351       1 main.go:297] Handling node with IPs: map[192.168.39.168:{}]
	I0217 11:56:20.009426       1 main.go:324] Node ha-783738-m04 has CIDR [10.244.3.0/24] 
	I0217 11:56:30.000205       1 main.go:297] Handling node with IPs: map[192.168.39.168:{}]
	I0217 11:56:30.000320       1 main.go:324] Node ha-783738-m04 has CIDR [10.244.3.0/24] 
	I0217 11:56:30.000673       1 main.go:297] Handling node with IPs: map[192.168.39.249:{}]
	I0217 11:56:30.004120       1 main.go:301] handling current node
	I0217 11:56:30.004403       1 main.go:297] Handling node with IPs: map[192.168.39.31:{}]
	I0217 11:56:30.004484       1 main.go:324] Node ha-783738-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [0d8dd6abc6b0] <==
	W0217 11:58:05.008746       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0217 11:58:05.009254       1 options.go:238] external host was not specified, using 192.168.39.249
	I0217 11:58:05.012100       1 server.go:143] Version: v1.32.1
	I0217 11:58:05.012139       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0217 11:58:05.254592       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0217 11:58:05.265931       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0217 11:58:05.302917       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0217 11:58:05.302958       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0217 11:58:05.303380       1 instance.go:233] Using reconciler: lease
	W0217 11:58:25.253372       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0217 11:58:25.253478       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0217 11:58:25.304453       1 instance.go:226] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [2e90f752fdc0] <==
	I0217 11:58:05.575513       1 serving.go:386] Generated self-signed cert in-memory
	I0217 11:58:05.850219       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0217 11:58:05.850380       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0217 11:58:05.851835       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0217 11:58:05.852508       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0217 11:58:05.852713       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0217 11:58:05.852833       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0217 11:58:26.312388       1 controllermanager.go:230] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.249:8443/healthz\": dial tcp 192.168.39.249:8443: connect: connection refused"
	
	
	==> kube-proxy [5caaef1da414] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0217 11:53:59.616708       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0217 11:53:59.651486       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.249"]
	E0217 11:53:59.651650       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0217 11:53:59.696326       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0217 11:53:59.696377       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0217 11:53:59.696401       1 server_linux.go:170] "Using iptables Proxier"
	I0217 11:53:59.710221       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0217 11:53:59.711347       1 server.go:497] "Version info" version="v1.32.1"
	I0217 11:53:59.711380       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0217 11:53:59.716398       1 config.go:199] "Starting service config controller"
	I0217 11:53:59.717714       1 config.go:105] "Starting endpoint slice config controller"
	I0217 11:53:59.717746       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0217 11:53:59.718142       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0217 11:53:59.718615       1 config.go:329] "Starting node config controller"
	I0217 11:53:59.718758       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0217 11:53:59.817915       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0217 11:53:59.819456       1 shared_informer.go:320] Caches are synced for service config
	I0217 11:53:59.821373       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1395373a3c21] <==
	E0217 11:53:52.919534       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:53.771964       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:53.772105       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:55.316775       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.249:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:55.316841       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.249:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:55.317229       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.249:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:55.317287       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.249:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:55.599247       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.249:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:55.599332       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.249:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:55.855471       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.249:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:55.855524       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.249:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:56.059180       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:56.059238       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:59.073926       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0217 11:53:59.074031       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0217 11:53:59.074570       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0217 11:53:59.075126       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0217 11:53:59.075450       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0217 11:53:59.074624       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0217 11:54:13.896773       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0217 11:56:05.957670       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-58667487b6-v7x5t\": pod busybox-58667487b6-v7x5t is already assigned to node \"ha-783738-m04\"" plugin="DefaultBinder" pod="default/busybox-58667487b6-v7x5t" node="ha-783738-m04"
	E0217 11:56:05.971236       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod c5148a30-9b13-42ed-87c8-723413b074d3(default/busybox-58667487b6-v7x5t) wasn't assumed so cannot be forgotten" pod="default/busybox-58667487b6-v7x5t"
	E0217 11:56:05.971303       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-58667487b6-v7x5t\": pod busybox-58667487b6-v7x5t is already assigned to node \"ha-783738-m04\"" pod="default/busybox-58667487b6-v7x5t"
	I0217 11:56:05.971509       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-58667487b6-v7x5t" node="ha-783738-m04"
	E0217 11:56:37.999387       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d524d25a3256] <==
	E0217 11:58:26.313559       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.249:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37922->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.313700       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37926->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.313773       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37926->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.313906       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.249:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37956->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.313971       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.249:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37956->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.314101       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.249:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37960->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.314185       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.249:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37960->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.314462       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37888->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.314547       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37888->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.314713       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37930->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.314798       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37930->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.314960       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.249:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37948->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.315166       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.249:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37948->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.315243       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: Get "https://192.168.39.249:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37940->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.315352       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.249:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37940->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:29.432094       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.249:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:58:29.432235       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.249:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:58:32.758441       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.249:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:58:32.758583       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.249:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:58:33.069242       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:58:33.069380       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:58:35.727701       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.249:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:58:35.727922       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.249:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:58:36.974377       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.249:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:58:36.974419       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.249:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Feb 17 11:58:26 ha-783738 kubelet[1591]: I0217 11:58:26.486508    1591 scope.go:117] "RemoveContainer" containerID="1683ded4f12ef91eea7067f33248f5185b17f0532a1c1480efe277bcd8accfe6"
	Feb 17 11:58:26 ha-783738 kubelet[1591]: E0217 11:58:26.487506    1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-783738\" not found" node="ha-783738"
	Feb 17 11:58:26 ha-783738 kubelet[1591]: I0217 11:58:26.487581    1591 scope.go:117] "RemoveContainer" containerID="2e90f752fdc0601abb5401e228fa8355b97462cfd9f4dafb766f56eaf8e7b13a"
	Feb 17 11:58:26 ha-783738 kubelet[1591]: E0217 11:58:26.487721    1591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-783738_kube-system(37cb2af166ca362ca24afd5a80241d47)\"" pod="kube-system/kube-controller-manager-ha-783738" podUID="37cb2af166ca362ca24afd5a80241d47"
	Feb 17 11:58:26 ha-783738 kubelet[1591]: E0217 11:58:26.495193    1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-783738\" not found" node="ha-783738"
	Feb 17 11:58:26 ha-783738 kubelet[1591]: I0217 11:58:26.495253    1591 scope.go:117] "RemoveContainer" containerID="0d8dd6abc6b0262f0e2de062685df6bbc87187dd14023d0fd12b894f48bd2001"
	Feb 17 11:58:26 ha-783738 kubelet[1591]: E0217 11:58:26.495523    1591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-783738_kube-system(77f0e47471ffa89381403ccfd101e5e7)\"" pod="kube-system/kube-apiserver-ha-783738" podUID="77f0e47471ffa89381403ccfd101e5e7"
	Feb 17 11:58:26 ha-783738 kubelet[1591]: E0217 11:58:26.703334    1591 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-783738\" not found"
	Feb 17 11:58:27 ha-783738 kubelet[1591]: E0217 11:58:27.238622    1591 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-783738.1824fce9ab5e06e9  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-783738,UID:ha-783738,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-783738,},FirstTimestamp:2025-02-17 11:57:16.604499689 +0000 UTC m=+0.220042798,LastTimestamp:2025-02-17 11:57:16.604499689 +0000 UTC m=+0.220042798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-783738,}"
	Feb 17 11:58:30 ha-783738 kubelet[1591]: E0217 11:58:30.957653    1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-783738\" not found" node="ha-783738"
	Feb 17 11:58:30 ha-783738 kubelet[1591]: I0217 11:58:30.957784    1591 scope.go:117] "RemoveContainer" containerID="0d8dd6abc6b0262f0e2de062685df6bbc87187dd14023d0fd12b894f48bd2001"
	Feb 17 11:58:30 ha-783738 kubelet[1591]: E0217 11:58:30.957928    1591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-783738_kube-system(77f0e47471ffa89381403ccfd101e5e7)\"" pod="kube-system/kube-apiserver-ha-783738" podUID="77f0e47471ffa89381403ccfd101e5e7"
	Feb 17 11:58:31 ha-783738 kubelet[1591]: I0217 11:58:31.169391    1591 kubelet_node_status.go:76] "Attempting to register node" node="ha-783738"
	Feb 17 11:58:32 ha-783738 kubelet[1591]: E0217 11:58:32.182236    1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-783738\" not found" node="ha-783738"
	Feb 17 11:58:32 ha-783738 kubelet[1591]: I0217 11:58:32.182362    1591 scope.go:117] "RemoveContainer" containerID="2e90f752fdc0601abb5401e228fa8355b97462cfd9f4dafb766f56eaf8e7b13a"
	Feb 17 11:58:32 ha-783738 kubelet[1591]: E0217 11:58:32.182489    1591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-783738_kube-system(37cb2af166ca362ca24afd5a80241d47)\"" pod="kube-system/kube-controller-manager-ha-783738" podUID="37cb2af166ca362ca24afd5a80241d47"
	Feb 17 11:58:33 ha-783738 kubelet[1591]: E0217 11:58:33.382650    1591 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.254:8443: connect: no route to host" node="ha-783738"
	Feb 17 11:58:33 ha-783738 kubelet[1591]: E0217 11:58:33.382815    1591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-783738?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Feb 17 11:58:33 ha-783738 kubelet[1591]: W0217 11:58:33.382655    1591 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-783738&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	Feb 17 11:58:33 ha-783738 kubelet[1591]: E0217 11:58:33.383127    1591 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-783738&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Feb 17 11:58:36 ha-783738 kubelet[1591]: E0217 11:58:36.704343    1591 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-783738\" not found"
	Feb 17 11:58:37 ha-783738 kubelet[1591]: E0217 11:58:37.748003    1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-783738\" not found" node="ha-783738"
	Feb 17 11:58:39 ha-783738 kubelet[1591]: E0217 11:58:39.526616    1591 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-783738.1824fce9ab5e06e9  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-783738,UID:ha-783738,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-783738,},FirstTimestamp:2025-02-17 11:57:16.604499689 +0000 UTC m=+0.220042798,LastTimestamp:2025-02-17 11:57:16.604499689 +0000 UTC m=+0.220042798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-783738,}"
	Feb 17 11:58:39 ha-783738 kubelet[1591]: E0217 11:58:39.748034    1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-783738\" not found" node="ha-783738"
	Feb 17 11:58:40 ha-783738 kubelet[1591]: I0217 11:58:40.384759    1591 kubelet_node_status.go:76] "Attempting to register node" node="ha-783738"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-783738 -n ha-783738
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-783738 -n ha-783738: exit status 2 (233.658036ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-783738" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (112.44s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-783738" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-783738\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-783738\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.32.1\",\"ClusterName\":\"ha-783738\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.249\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.31\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.168\",\"Port\":0,\"KubernetesVersion\":\"v1.32.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-783738 -n ha-783738
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-783738 -n ha-783738: exit status 2 (225.61388ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-783738 cp ha-783738-m03:/home/docker/cp-test.txt                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:50 UTC | 17 Feb 25 11:50 UTC |
	|         | ha-783738-m04:/home/docker/cp-test_ha-783738-m03_ha-783738-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:50 UTC | 17 Feb 25 11:50 UTC |
	|         | ha-783738-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n ha-783738-m04 sudo cat                                          | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:50 UTC | 17 Feb 25 11:50 UTC |
	|         | /home/docker/cp-test_ha-783738-m03_ha-783738-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-783738 cp testdata/cp-test.txt                                                | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:50 UTC | 17 Feb 25 11:50 UTC |
	|         | ha-783738-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:50 UTC | 17 Feb 25 11:50 UTC |
	|         | ha-783738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-783738 cp ha-783738-m04:/home/docker/cp-test.txt                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:50 UTC | 17 Feb 25 11:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3703533036/001/cp-test_ha-783738-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-783738 cp ha-783738-m04:/home/docker/cp-test.txt                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738:/home/docker/cp-test_ha-783738-m04_ha-783738.txt                       |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n ha-783738 sudo cat                                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | /home/docker/cp-test_ha-783738-m04_ha-783738.txt                                 |           |         |         |                     |                     |
	| cp      | ha-783738 cp ha-783738-m04:/home/docker/cp-test.txt                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m02:/home/docker/cp-test_ha-783738-m04_ha-783738-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n ha-783738-m02 sudo cat                                          | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | /home/docker/cp-test_ha-783738-m04_ha-783738-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-783738 cp ha-783738-m04:/home/docker/cp-test.txt                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m03:/home/docker/cp-test_ha-783738-m04_ha-783738-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n ha-783738-m03 sudo cat                                          | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | /home/docker/cp-test_ha-783738-m04_ha-783738-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-783738 node stop m02 -v=7                                                     | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-783738 node start m02 -v=7                                                    | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-783738 -v=7                                                           | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-783738 -v=7                                                                | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:52 UTC | 17 Feb 25 11:52 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-783738 --wait=true -v=7                                                    | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:52 UTC | 17 Feb 25 11:56 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-783738                                                                | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:56 UTC |                     |
	| node    | ha-783738 node delete m03 -v=7                                                   | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:56 UTC | 17 Feb 25 11:56 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-783738 stop -v=7                                                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:56 UTC | 17 Feb 25 11:56 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-783738 --wait=true                                                         | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:56 UTC |                     |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/17 11:56:50
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0217 11:56:50.215291  100380 out.go:345] Setting OutFile to fd 1 ...
	I0217 11:56:50.215609  100380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:56:50.215619  100380 out.go:358] Setting ErrFile to fd 2...
	I0217 11:56:50.215624  100380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:56:50.215819  100380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-77349/.minikube/bin
	I0217 11:56:50.216353  100380 out.go:352] Setting JSON to false
	I0217 11:56:50.217237  100380 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5958,"bootTime":1739787452,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0217 11:56:50.217362  100380 start.go:139] virtualization: kvm guest
	I0217 11:56:50.219910  100380 out.go:177] * [ha-783738] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0217 11:56:50.221323  100380 out.go:177]   - MINIKUBE_LOCATION=20427
	I0217 11:56:50.221334  100380 notify.go:220] Checking for updates...
	I0217 11:56:50.223835  100380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0217 11:56:50.224954  100380 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:56:50.226180  100380 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-77349/.minikube
	I0217 11:56:50.227361  100380 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0217 11:56:50.228473  100380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0217 11:56:50.229885  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:56:50.230261  100380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:56:50.230308  100380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:56:50.245239  100380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46091
	I0217 11:56:50.245761  100380 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:56:50.246359  100380 main.go:141] libmachine: Using API Version  1
	I0217 11:56:50.246382  100380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:56:50.246775  100380 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:56:50.246962  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:56:50.247230  100380 driver.go:394] Setting default libvirt URI to qemu:///system
	I0217 11:56:50.247538  100380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:56:50.247594  100380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:56:50.262713  100380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
	I0217 11:56:50.263097  100380 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:56:50.263692  100380 main.go:141] libmachine: Using API Version  1
	I0217 11:56:50.263752  100380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:56:50.264059  100380 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:56:50.264289  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:56:50.297981  100380 out.go:177] * Using the kvm2 driver based on existing profile
	I0217 11:56:50.299143  100380 start.go:297] selected driver: kvm2
	I0217 11:56:50.299155  100380 start.go:901] validating driver "kvm2" against &{Name:ha-783738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.168 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 11:56:50.299304  100380 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0217 11:56:50.299646  100380 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0217 11:56:50.299706  100380 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20427-77349/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0217 11:56:50.314229  100380 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0217 11:56:50.314917  100380 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0217 11:56:50.314949  100380 cni.go:84] Creating CNI manager for ""
	I0217 11:56:50.315000  100380 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0217 11:56:50.315060  100380 start.go:340] cluster config:
	{Name:ha-783738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.168 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 11:56:50.315190  100380 iso.go:125] acquiring lock: {Name:mk4380b7bda8fcd8bced9705ff1695c3fb7dac0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0217 11:56:50.317519  100380 out.go:177] * Starting "ha-783738" primary control-plane node in "ha-783738" cluster
	I0217 11:56:50.318547  100380 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0217 11:56:50.318578  100380 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20427-77349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0217 11:56:50.318588  100380 cache.go:56] Caching tarball of preloaded images
	I0217 11:56:50.318681  100380 preload.go:172] Found /home/jenkins/minikube-integration/20427-77349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0217 11:56:50.318695  100380 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0217 11:56:50.318829  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:56:50.319009  100380 start.go:360] acquireMachinesLock for ha-783738: {Name:mk05ba8323ae77ab7dcc14c378d65810d956fdc0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0217 11:56:50.319055  100380 start.go:364] duration metric: took 23.519µs to acquireMachinesLock for "ha-783738"
	I0217 11:56:50.319080  100380 start.go:96] Skipping create...Using existing machine configuration
	I0217 11:56:50.319088  100380 fix.go:54] fixHost starting: 
	I0217 11:56:50.319353  100380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:56:50.319391  100380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:56:50.333761  100380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34803
	I0217 11:56:50.334152  100380 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:56:50.334693  100380 main.go:141] libmachine: Using API Version  1
	I0217 11:56:50.334714  100380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:56:50.335000  100380 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:56:50.335210  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:56:50.335347  100380 main.go:141] libmachine: (ha-783738) Calling .GetState
	I0217 11:56:50.336730  100380 fix.go:112] recreateIfNeeded on ha-783738: state=Stopped err=<nil>
	I0217 11:56:50.336752  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	W0217 11:56:50.336864  100380 fix.go:138] unexpected machine state, will restart: <nil>
	I0217 11:56:50.338814  100380 out.go:177] * Restarting existing kvm2 VM for "ha-783738" ...
	I0217 11:56:50.340020  100380 main.go:141] libmachine: (ha-783738) Calling .Start
	I0217 11:56:50.340200  100380 main.go:141] libmachine: (ha-783738) starting domain...
	I0217 11:56:50.340221  100380 main.go:141] libmachine: (ha-783738) ensuring networks are active...
	I0217 11:56:50.340845  100380 main.go:141] libmachine: (ha-783738) Ensuring network default is active
	I0217 11:56:50.341268  100380 main.go:141] libmachine: (ha-783738) Ensuring network mk-ha-783738 is active
	I0217 11:56:50.341612  100380 main.go:141] libmachine: (ha-783738) getting domain XML...
	I0217 11:56:50.342286  100380 main.go:141] libmachine: (ha-783738) creating domain...
	I0217 11:56:51.533335  100380 main.go:141] libmachine: (ha-783738) waiting for IP...
	I0217 11:56:51.534198  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:51.534571  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:51.534631  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:51.534554  100416 retry.go:31] will retry after 214.112758ms: waiting for domain to come up
	I0217 11:56:51.750038  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:51.750535  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:51.750587  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:51.750528  100416 retry.go:31] will retry after 287.575076ms: waiting for domain to come up
	I0217 11:56:52.040019  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:52.040473  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:52.040515  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:52.040452  100416 retry.go:31] will retry after 303.389275ms: waiting for domain to come up
	I0217 11:56:52.345057  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:52.345400  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:52.345452  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:52.345383  100416 retry.go:31] will retry after 580.610288ms: waiting for domain to come up
	I0217 11:56:52.927102  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:52.927623  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:52.927663  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:52.927596  100416 retry.go:31] will retry after 470.88869ms: waiting for domain to come up
	I0217 11:56:53.400293  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:53.400698  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:53.400725  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:53.400636  100416 retry.go:31] will retry after 645.102407ms: waiting for domain to come up
	I0217 11:56:54.046798  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:54.047309  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:54.047365  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:54.047265  100416 retry.go:31] will retry after 993.016218ms: waiting for domain to come up
	I0217 11:56:55.041450  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:55.041808  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:55.041828  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:55.041790  100416 retry.go:31] will retry after 1.096274529s: waiting for domain to come up
	I0217 11:56:56.139475  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:56.139892  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:56.139957  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:56.139882  100416 retry.go:31] will retry after 1.840421804s: waiting for domain to come up
	I0217 11:56:57.981618  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:57.982040  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:57.982068  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:57.981979  100416 retry.go:31] will retry after 1.8969141s: waiting for domain to come up
	I0217 11:56:59.881026  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:59.881535  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:59.881570  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:59.881471  100416 retry.go:31] will retry after 1.890240518s: waiting for domain to come up
	I0217 11:57:01.773274  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:01.773728  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:57:01.773779  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:57:01.773696  100416 retry.go:31] will retry after 3.046762911s: waiting for domain to come up
	I0217 11:57:04.823999  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:04.824458  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:57:04.824497  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:57:04.824453  100416 retry.go:31] will retry after 3.819063496s: waiting for domain to come up
	I0217 11:57:08.647831  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.648309  100380 main.go:141] libmachine: (ha-783738) found domain IP: 192.168.39.249
	I0217 11:57:08.648334  100380 main.go:141] libmachine: (ha-783738) reserving static IP address...
	I0217 11:57:08.648347  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has current primary IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.648799  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "ha-783738", mac: "52:54:00:fb:6f:65", ip: "192.168.39.249"} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.648824  100380 main.go:141] libmachine: (ha-783738) DBG | skip adding static IP to network mk-ha-783738 - found existing host DHCP lease matching {name: "ha-783738", mac: "52:54:00:fb:6f:65", ip: "192.168.39.249"}
	I0217 11:57:08.648835  100380 main.go:141] libmachine: (ha-783738) reserved static IP address 192.168.39.249 for domain ha-783738
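
[Editor's note] The "will retry after ..." lines above come from minikube's retry helper (retry.go) polling libvirt until the restarted domain reports an IP address. A minimal sketch of that wait-with-growing-jittered-delay pattern, in Go (illustrative only, not minikube's actual implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check() until it succeeds or timeout elapses, roughly
// doubling the base delay each round and adding jitter so concurrent
// waiters do not poll in lockstep.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d failed, will retry after %s: %v\n", attempt, jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
}

func main() {
	start := time.Now()
	err := waitFor(func() error {
		if time.Since(start) < time.Second {
			return errors.New("waiting for domain to come up")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("result:", err)
}

The jitter explains why the logged intervals grow but are not exact doublings, and the deadline bounds the total wait before the start is declared failed.
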
	I0217 11:57:08.648846  100380 main.go:141] libmachine: (ha-783738) waiting for SSH...
	I0217 11:57:08.648862  100380 main.go:141] libmachine: (ha-783738) DBG | Getting to WaitForSSH function...
	I0217 11:57:08.650828  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.651193  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.651224  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.651387  100380 main.go:141] libmachine: (ha-783738) DBG | Using SSH client type: external
	I0217 11:57:08.651414  100380 main.go:141] libmachine: (ha-783738) DBG | Using SSH private key: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa (-rw-------)
	I0217 11:57:08.651435  100380 main.go:141] libmachine: (ha-783738) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0217 11:57:08.651464  100380 main.go:141] libmachine: (ha-783738) DBG | About to run SSH command:
	I0217 11:57:08.651480  100380 main.go:141] libmachine: (ha-783738) DBG | exit 0
	I0217 11:57:08.776922  100380 main.go:141] libmachine: (ha-783738) DBG | SSH cmd err, output: <nil>: 
	I0217 11:57:08.777326  100380 main.go:141] libmachine: (ha-783738) Calling .GetConfigRaw
	I0217 11:57:08.777959  100380 main.go:141] libmachine: (ha-783738) Calling .GetIP
	I0217 11:57:08.780301  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.780692  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.780735  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.780948  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:57:08.781137  100380 machine.go:93] provisionDockerMachine start ...
	I0217 11:57:08.781154  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:08.781442  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:08.783478  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.783868  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.783897  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.784048  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:08.784237  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:08.784393  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:08.784570  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:08.784738  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:08.784917  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:08.784928  100380 main.go:141] libmachine: About to run SSH command:
	hostname
	I0217 11:57:08.889484  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0217 11:57:08.889525  100380 main.go:141] libmachine: (ha-783738) Calling .GetMachineName
	I0217 11:57:08.889783  100380 buildroot.go:166] provisioning hostname "ha-783738"
	I0217 11:57:08.889818  100380 main.go:141] libmachine: (ha-783738) Calling .GetMachineName
	I0217 11:57:08.890003  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:08.892666  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.893027  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.893060  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.893202  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:08.893391  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:08.893536  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:08.893661  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:08.893787  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:08.893949  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:08.893960  100380 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-783738 && echo "ha-783738" | sudo tee /etc/hostname
	I0217 11:57:09.014626  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-783738
	
	I0217 11:57:09.014653  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.017274  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.017710  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.017744  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.017939  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.018131  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.018348  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.018473  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.018701  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:09.018967  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:09.018994  100380 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-783738' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-783738/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-783738' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0217 11:57:09.133208  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
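
[Editor's note] The /etc/hosts edit in the SSH command above is idempotent: it leaves the file alone when the hostname is already mapped, rewrites an existing 127.0.1.1 line in place, and only appends a new line as a last resort. A rough Go rendering of the same logic (a hypothetical helper, not minikube code):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry mirrors the shell snippet: skip if any line already ends
// with the hostname (approximating grep -xq '.*\sha-783738'), otherwise
// rewrite an existing 127.0.1.1 line, otherwise append one.
func ensureHostsEntry(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, line := range lines {
		if strings.HasSuffix(line, " "+name) || strings.HasSuffix(line, "\t"+name) {
			return hosts // already mapped; nothing to do
		}
	}
	for i, line := range lines {
		if strings.HasPrefix(line, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // sed 's/^127.0.1.1\s.*/.../'
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name // tee -a /etc/hosts
}

func main() {
	before := "127.0.0.1 localhost\n127.0.1.1 old-name"
	fmt.Println(ensureHostsEntry(before, "ha-783738"))
}
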
	I0217 11:57:09.133247  100380 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20427-77349/.minikube CaCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20427-77349/.minikube}
	I0217 11:57:09.133278  100380 buildroot.go:174] setting up certificates
	I0217 11:57:09.133295  100380 provision.go:84] configureAuth start
	I0217 11:57:09.133331  100380 main.go:141] libmachine: (ha-783738) Calling .GetMachineName
	I0217 11:57:09.133680  100380 main.go:141] libmachine: (ha-783738) Calling .GetIP
	I0217 11:57:09.136393  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.136746  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.136771  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.136918  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.139192  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.139545  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.139583  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.139699  100380 provision.go:143] copyHostCerts
	I0217 11:57:09.139734  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem
	I0217 11:57:09.139786  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem, removing ...
	I0217 11:57:09.139804  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem
	I0217 11:57:09.139883  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem (1082 bytes)
	I0217 11:57:09.139996  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem
	I0217 11:57:09.140023  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem, removing ...
	I0217 11:57:09.140030  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem
	I0217 11:57:09.140079  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem (1123 bytes)
	I0217 11:57:09.140159  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem
	I0217 11:57:09.140184  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem, removing ...
	I0217 11:57:09.140191  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem
	I0217 11:57:09.140228  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem (1675 bytes)
	I0217 11:57:09.140314  100380 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem org=jenkins.ha-783738 san=[127.0.0.1 192.168.39.249 ha-783738 localhost minikube]
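
[Editor's note] The server cert generated here carries the host's addresses and names as Subject Alternative Names (the san=[...] list above), so the Docker daemon's TLS endpoint validates under any of them. A minimal sketch of issuing such a CA-signed server certificate with Go's crypto/x509 (illustrative only; the SAN values are taken from the log line, the rest is assumed):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Throwaway self-signed CA standing in for minikube's ca.pem/ca-key.pem.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	ca := must(x509.ParseCertificate(must(
		x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

	// Server certificate with the SANs reported in the log line above.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-783738"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.249")},
		DNSNames:     []string{"ha-783738", "localhost", "minikube"},
	}
	der := must(x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey))
	cert := must(x509.ParseCertificate(der))
	fmt.Println("issued server cert, SANs:", cert.DNSNames, cert.IPAddresses)
}
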
	I0217 11:57:09.269804  100380 provision.go:177] copyRemoteCerts
	I0217 11:57:09.269900  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0217 11:57:09.269935  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.272592  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.272916  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.272945  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.273095  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.273282  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.273464  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.273600  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:09.355256  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0217 11:57:09.355331  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0217 11:57:09.378132  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0217 11:57:09.378243  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0217 11:57:09.399749  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0217 11:57:09.399830  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0217 11:57:09.421183  100380 provision.go:87] duration metric: took 287.855291ms to configureAuth
	I0217 11:57:09.421207  100380 buildroot.go:189] setting minikube options for container-runtime
	I0217 11:57:09.421432  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:57:09.421460  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:09.421765  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.424701  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.425141  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.425173  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.425370  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.425557  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.425734  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.425883  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.426059  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:09.426283  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:09.426297  100380 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0217 11:57:09.534976  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0217 11:57:09.535006  100380 buildroot.go:70] root file system type: tmpfs
	I0217 11:57:09.535125  100380 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0217 11:57:09.535163  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.537739  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.538108  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.538126  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.538307  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.538481  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.538662  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.538802  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.538949  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:09.539142  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:09.539243  100380 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0217 11:57:09.658326  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0217 11:57:09.658371  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.661372  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.661838  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.661875  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.662085  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.662300  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.662435  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.662559  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.662707  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:09.662897  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:09.662913  100380 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0217 11:57:11.588699  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0217 11:57:11.588766  100380 machine.go:96] duration metric: took 2.807616414s to provisionDockerMachine
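
[Editor's note] The diff-or-replace one-liner above (diff ... || { mv ...; systemctl daemon-reload && ... restart docker; }) is what makes the unit update idempotent: the daemon is only reloaded and restarted when the rendered docker.service actually differs from the installed one, which is also why a freshly restored VM with no unit installed prints the "can't stat" message followed by the created-symlink output. A rough Go rendering of the same pattern (a hypothetical helper, not minikube code):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit installs the staged unit file and bounces the service only when
// its content changed. A missing installed unit counts as "different", just
// like diff's can't-stat error in the shell version.
func updateUnit(installed, rendered string) error {
	newData, err := os.ReadFile(rendered)
	if err != nil {
		return err
	}
	oldData, readErr := os.ReadFile(installed)
	if readErr == nil && bytes.Equal(oldData, newData) {
		return os.Remove(rendered) // nothing to do; discard the staged copy
	}
	if err := os.Rename(rendered, installed); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"))
}
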
	I0217 11:57:11.588782  100380 start.go:293] postStartSetup for "ha-783738" (driver="kvm2")
	I0217 11:57:11.588792  100380 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0217 11:57:11.588810  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.589177  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0217 11:57:11.589221  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.592192  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.592596  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.592627  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.592785  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.592979  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.593170  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.593334  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:11.675232  100380 ssh_runner.go:195] Run: cat /etc/os-release
	I0217 11:57:11.679319  100380 info.go:137] Remote host: Buildroot 2023.02.9
	I0217 11:57:11.679347  100380 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-77349/.minikube/addons for local assets ...
	I0217 11:57:11.679434  100380 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-77349/.minikube/files for local assets ...
	I0217 11:57:11.679553  100380 filesync.go:149] local asset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> 845022.pem in /etc/ssl/certs
	I0217 11:57:11.679569  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> /etc/ssl/certs/845022.pem
	I0217 11:57:11.679700  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0217 11:57:11.688596  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem --> /etc/ssl/certs/845022.pem (1708 bytes)
	I0217 11:57:11.712948  100380 start.go:296] duration metric: took 124.147315ms for postStartSetup
	I0217 11:57:11.713041  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.713388  100380 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0217 11:57:11.713431  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.716109  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.716482  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.716509  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.716697  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.716902  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.717111  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.717237  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:11.799568  100380 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0217 11:57:11.799647  100380 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0217 11:57:11.840659  100380 fix.go:56] duration metric: took 21.521561421s for fixHost
	I0217 11:57:11.840710  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.843711  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.844159  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.844211  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.844334  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.844538  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.844685  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.844877  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.845064  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:11.845292  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:11.845324  100380 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0217 11:57:11.961693  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739793431.919777749
	
	I0217 11:57:11.961720  100380 fix.go:216] guest clock: 1739793431.919777749
	I0217 11:57:11.961728  100380 fix.go:229] Guest: 2025-02-17 11:57:11.919777749 +0000 UTC Remote: 2025-02-17 11:57:11.840688548 +0000 UTC m=+21.663425668 (delta=79.089201ms)
	I0217 11:57:11.961764  100380 fix.go:200] guest clock delta is within tolerance: 79.089201ms
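	
	The guest clock check above runs "date +%s.%N" over SSH and compares it against the host clock, accepting the roughly 79ms drift. A rough way to repeat the measurement from the host (a sketch: the SSH round trip inflates the delta slightly, bc is assumed installed, and the profile's SSH key is assumed loaded):
	
	  guest=$(ssh docker@192.168.39.249 date +%s.%N)  # guest timestamp via SSH
	  host=$(date +%s.%N)                             # host timestamp
	  echo "clock delta: $(echo "$host - $guest" | bc)s"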
	I0217 11:57:11.961771  100380 start.go:83] releasing machines lock for "ha-783738", held for 21.642703542s
	I0217 11:57:11.961797  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.962076  100380 main.go:141] libmachine: (ha-783738) Calling .GetIP
	I0217 11:57:11.964739  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.965072  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.965098  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.965245  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.965780  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.965938  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.966020  100380 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0217 11:57:11.966085  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.966153  100380 ssh_runner.go:195] Run: cat /version.json
	I0217 11:57:11.966182  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.968710  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.968814  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.969180  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.969211  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.969228  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.969243  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.969400  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.969505  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.969573  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.969654  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.969705  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.969780  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.969855  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:11.969896  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:12.070993  100380 ssh_runner.go:195] Run: systemctl --version
	I0217 11:57:12.076962  100380 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0217 11:57:12.082069  100380 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0217 11:57:12.082164  100380 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0217 11:57:12.097308  100380 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0217 11:57:12.097353  100380 start.go:495] detecting cgroup driver to use...
	I0217 11:57:12.097502  100380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0217 11:57:12.116857  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0217 11:57:12.128177  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0217 11:57:12.139383  100380 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0217 11:57:12.139433  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0217 11:57:12.150535  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0217 11:57:12.161824  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0217 11:57:12.173075  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0217 11:57:12.184735  100380 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0217 11:57:12.196065  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0217 11:57:12.206061  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0217 11:57:12.215826  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0217 11:57:12.225719  100380 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0217 11:57:12.234589  100380 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0217 11:57:12.234644  100380 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0217 11:57:12.244581  100380 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
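	
	The status-255 sysctl above is the expected probe failure when br_netfilter is not loaded yet, which the log itself flags as "might be okay"; minikube falls back to loading the module and then enables IPv4 forwarding. The same probe-then-load sequence as a standalone sketch:
	
	  # Load br_netfilter only if the bridge sysctl is not visible yet,
	  # then make sure the kernel forwards IPv4 traffic.
	  sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1 \
	    || sudo modprobe br_netfilter
	  echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward >/dev/null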
	I0217 11:57:12.253602  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:12.359116  100380 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0217 11:57:12.382906  100380 start.go:495] detecting cgroup driver to use...
	I0217 11:57:12.383010  100380 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0217 11:57:12.408300  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0217 11:57:12.424027  100380 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0217 11:57:12.444833  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0217 11:57:12.457628  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0217 11:57:12.470140  100380 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0217 11:57:12.497764  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
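	
	The sequence above leaves docker as the only active runtime: containerd and crio are each stopped and then re-checked with is-active. Condensed into one loop (a sketch; unit names as in the log):
	
	  # Stop competing runtimes so cri-dockerd is the only CRI endpoint.
	  for svc in containerd crio; do
	    if sudo systemctl is-active --quiet "$svc"; then
	      sudo systemctl stop -f "$svc"
	    fi
	  done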
	I0217 11:57:12.511071  100380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0217 11:57:12.529141  100380 ssh_runner.go:195] Run: which cri-dockerd
	I0217 11:57:12.532846  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0217 11:57:12.541895  100380 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0217 11:57:12.557198  100380 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0217 11:57:12.670128  100380 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0217 11:57:12.796263  100380 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0217 11:57:12.796399  100380 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0217 11:57:12.812229  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:12.923350  100380 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0217 11:57:15.351609  100380 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.428206669s)
	I0217 11:57:15.351699  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0217 11:57:15.364852  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0217 11:57:15.377423  100380 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0217 11:57:15.493635  100380 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0217 11:57:15.621524  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:15.730858  100380 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0217 11:57:15.748138  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0217 11:57:15.761818  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:15.881775  100380 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0217 11:57:15.960772  100380 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0217 11:57:15.960858  100380 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0217 11:57:15.966411  100380 start.go:563] Will wait 60s for crictl version
	I0217 11:57:15.966517  100380 ssh_runner.go:195] Run: which crictl
	I0217 11:57:15.974036  100380 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0217 11:57:16.011837  100380 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0217 11:57:16.011912  100380 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0217 11:57:16.036945  100380 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0217 11:57:16.060974  100380 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0217 11:57:16.061031  100380 main.go:141] libmachine: (ha-783738) Calling .GetIP
	I0217 11:57:16.063810  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:16.064255  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:16.064298  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:16.064499  100380 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0217 11:57:16.068464  100380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
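	
	The /etc/hosts update above is a filter-and-append rewrite: any stale host.minikube.internal line is stripped, the current mapping is appended, and the temporary file is copied over /etc/hosts in one step. The same pattern spelled out (IP and name taken from the log):
	
	  # Replace (or add) a single /etc/hosts entry without duplicates.
	  { grep -v $'\thost.minikube.internal$' /etc/hosts
	    printf '192.168.39.1\thost.minikube.internal\n'
	  } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts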
	I0217 11:57:16.080668  100380 kubeadm.go:883] updating cluster {Name:ha-783738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.168 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0217 11:57:16.080804  100380 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0217 11:57:16.080849  100380 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0217 11:57:16.098890  100380 docker.go:689] Got preloaded images: -- stdout --
	kindest/kindnetd:v20250214-acbabc1a
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	ghcr.io/kube-vip/kube-vip:v0.8.9
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0217 11:57:16.098911  100380 docker.go:619] Images already preloaded, skipping extraction
	I0217 11:57:16.098974  100380 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0217 11:57:16.116506  100380 docker.go:689] Got preloaded images: -- stdout --
	kindest/kindnetd:v20250214-acbabc1a
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	ghcr.io/kube-vip/kube-vip:v0.8.9
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0217 11:57:16.116540  100380 cache_images.go:84] Images are preloaded, skipping loading
	I0217 11:57:16.116556  100380 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.32.1 docker true true} ...
	I0217 11:57:16.116703  100380 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-783738 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
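	
	The kubelet unit above uses the standard systemd override idiom: the empty ExecStart= first clears the command inherited from the base unit, and the second ExecStart= sets the minikube-specific one. A hand-written drop-in would take the same shape (a sketch; the drop-in path and trimmed flag set are illustrative only):
	
	  # Hypothetical drop-in showing the clear-then-set ExecStart idiom.
	  printf '[Service]\nExecStart=\nExecStart=%s --node-ip=%s\n' \
	    /var/lib/minikube/binaries/v1.32.1/kubelet 192.168.39.249 |
	    sudo tee /etc/systemd/system/kubelet.service.d/99-override.conf >/dev/null
	  sudo systemctl daemon-reload && sudo systemctl restart kubelet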
	I0217 11:57:16.116764  100380 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0217 11:57:16.164431  100380 cni.go:84] Creating CNI manager for ""
	I0217 11:57:16.164455  100380 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0217 11:57:16.164469  100380 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0217 11:57:16.164499  100380 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-783738 NodeName:ha-783738 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0217 11:57:16.164682  100380 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-783738"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.249"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
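	
	Before the rendered config is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below), it can be sanity-checked offline; recent kubeadm releases ship a schema validator (a sketch, using the binaries path from the log):
	
	  # Validate the generated config against the v1beta4 API schema.
	  sudo /var/lib/minikube/binaries/v1.32.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new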
	
	I0217 11:57:16.164704  100380 kube-vip.go:115] generating kube-vip config ...
	I0217 11:57:16.164766  100380 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0217 11:57:16.178981  100380 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0217 11:57:16.179102  100380 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
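	
	The manifest above binds the HA virtual IP 192.168.39.254 on eth0 and elects a leader through the plndr-cp-lock lease. Once the pod is up, two quick checks (a sketch, assuming cluster and node access):
	
	  # Which control-plane node currently holds the VIP lease?
	  kubectl -n kube-system get lease plndr-cp-lock
	  # Is the VIP bound on this node's interface?
	  ip addr show eth0 | grep 192.168.39.254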
	I0217 11:57:16.179161  100380 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0217 11:57:16.189237  100380 binaries.go:44] Found k8s binaries, skipping transfer
	I0217 11:57:16.189321  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0217 11:57:16.198727  100380 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0217 11:57:16.214787  100380 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0217 11:57:16.231014  100380 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0217 11:57:16.246729  100380 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0217 11:57:16.261779  100380 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0217 11:57:16.265453  100380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0217 11:57:16.276521  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:16.384249  100380 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0217 11:57:16.401291  100380 certs.go:68] Setting up /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738 for IP: 192.168.39.249
	I0217 11:57:16.401328  100380 certs.go:194] generating shared ca certs ...
	I0217 11:57:16.401350  100380 certs.go:226] acquiring lock for ca certs: {Name:mk7093571229e43ae88bf2507ccc9fd2cd05388e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:16.401508  100380 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.key
	I0217 11:57:16.401544  100380 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.key
	I0217 11:57:16.401555  100380 certs.go:256] generating profile certs ...
	I0217 11:57:16.401635  100380 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/client.key
	I0217 11:57:16.401660  100380 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key.1b1cbf3b
	I0217 11:57:16.401671  100380 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt.1b1cbf3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.31 192.168.39.254]
	I0217 11:57:16.475033  100380 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt.1b1cbf3b ...
	I0217 11:57:16.475062  100380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt.1b1cbf3b: {Name:mkcae1f9f128e66451afcd5b133e6826e9862cbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:16.475228  100380 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key.1b1cbf3b ...
	I0217 11:57:16.475243  100380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key.1b1cbf3b: {Name:mk484c481609a3c2ed473dfecb8f5468118b1367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:16.475330  100380 certs.go:381] copying /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt.1b1cbf3b -> /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt
	I0217 11:57:16.475492  100380 certs.go:385] copying /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key.1b1cbf3b -> /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key
	I0217 11:57:16.475629  100380 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.key
	I0217 11:57:16.475644  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0217 11:57:16.475656  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0217 11:57:16.475671  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0217 11:57:16.475699  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0217 11:57:16.475714  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0217 11:57:16.475726  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0217 11:57:16.475737  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0217 11:57:16.475748  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0217 11:57:16.475800  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/84502.pem (1338 bytes)
	W0217 11:57:16.475831  100380 certs.go:480] ignoring /home/jenkins/minikube-integration/20427-77349/.minikube/certs/84502_empty.pem, impossibly tiny 0 bytes
	I0217 11:57:16.475839  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem (1679 bytes)
	I0217 11:57:16.475861  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem (1082 bytes)
	I0217 11:57:16.475900  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem (1123 bytes)
	I0217 11:57:16.475927  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem (1675 bytes)
	I0217 11:57:16.476002  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem (1708 bytes)
	I0217 11:57:16.476031  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/84502.pem -> /usr/share/ca-certificates/84502.pem
	I0217 11:57:16.476046  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> /usr/share/ca-certificates/845022.pem
	I0217 11:57:16.476058  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0217 11:57:16.476652  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0217 11:57:16.507138  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0217 11:57:16.534527  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0217 11:57:16.562922  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0217 11:57:16.587311  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0217 11:57:16.624087  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0217 11:57:16.662037  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0217 11:57:16.713619  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0217 11:57:16.756345  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/certs/84502.pem --> /usr/share/ca-certificates/84502.pem (1338 bytes)
	I0217 11:57:16.803520  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem --> /usr/share/ca-certificates/845022.pem (1708 bytes)
	I0217 11:57:16.846879  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0217 11:57:16.920267  100380 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0217 11:57:16.950648  100380 ssh_runner.go:195] Run: openssl version
	I0217 11:57:16.958784  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84502.pem && ln -fs /usr/share/ca-certificates/84502.pem /etc/ssl/certs/84502.pem"
	I0217 11:57:16.987238  100380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84502.pem
	I0217 11:57:16.994220  100380 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 17 11:42 /usr/share/ca-certificates/84502.pem
	I0217 11:57:16.994283  100380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84502.pem
	I0217 11:57:17.016466  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84502.pem /etc/ssl/certs/51391683.0"
	I0217 11:57:17.039972  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845022.pem && ln -fs /usr/share/ca-certificates/845022.pem /etc/ssl/certs/845022.pem"
	I0217 11:57:17.061818  100380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845022.pem
	I0217 11:57:17.068988  100380 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 17 11:42 /usr/share/ca-certificates/845022.pem
	I0217 11:57:17.069057  100380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845022.pem
	I0217 11:57:17.075953  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/845022.pem /etc/ssl/certs/3ec20f2e.0"
	I0217 11:57:17.094161  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0217 11:57:17.111313  100380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0217 11:57:17.116268  100380 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 17 11:35 /usr/share/ca-certificates/minikubeCA.pem
	I0217 11:57:17.116335  100380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0217 11:57:17.122743  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
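	
	The "ln -fs ... /etc/ssl/certs/<hash>.0" steps above build OpenSSL's hashed CA lookup directory: the link name is the certificate's subject hash plus a ".0" suffix (b5213941.0 matches the minikubeCA entry seen in the log). The same step written out as a sketch:
	
	  # Recreate the subject-hash symlink OpenSSL uses for CA lookup.
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"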
	I0217 11:57:17.141827  100380 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0217 11:57:17.146771  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0217 11:57:17.158301  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0217 11:57:17.170200  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0217 11:57:17.177413  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0217 11:57:17.186556  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0217 11:57:17.193933  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
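	
	Each -checkend 86400 run above asks openssl whether the certificate will still be valid 24 hours from now; a non-zero exit would force regeneration. Rolled into one loop over the same files (a sketch):
	
	  # Flag any control-plane cert that expires within the next 24h.
	  for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	           etcd/healthcheck-client etcd/peer front-proxy-client; do
	    sudo openssl x509 -noout -checkend 86400 \
	      -in "/var/lib/minikube/certs/$c.crt" || echo "$c expires soon"
	  done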
	I0217 11:57:17.203839  100380 kubeadm.go:392] StartCluster: {Name:ha-783738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.168 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 11:57:17.204089  100380 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0217 11:57:17.225257  100380 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0217 11:57:17.236858  100380 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0217 11:57:17.236876  100380 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0217 11:57:17.236920  100380 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0217 11:57:17.246285  100380 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0217 11:57:17.246828  100380 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-783738" does not appear in /home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:57:17.246986  100380 kubeconfig.go:62] /home/jenkins/minikube-integration/20427-77349/kubeconfig needs updating (will repair): [kubeconfig missing "ha-783738" cluster setting kubeconfig missing "ha-783738" context setting]
	I0217 11:57:17.247367  100380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-77349/kubeconfig: {Name:mka23a5c17f10bb58374e83755a2ac6a44464e11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:17.247895  100380 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:57:17.248117  100380 kapi.go:59] client config for ha-783738: &rest.Config{Host:"https://192.168.39.249:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/client.crt", KeyFile:"/home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/client.key", CAFile:"/home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24df700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0217 11:57:17.248591  100380 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0217 11:57:17.248610  100380 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0217 11:57:17.248615  100380 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0217 11:57:17.248619  100380 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0217 11:57:17.248634  100380 cert_rotation.go:140] Starting client certificate rotation controller
	I0217 11:57:17.249054  100380 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0217 11:57:17.258029  100380 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.249
	I0217 11:57:17.258053  100380 kubeadm.go:597] duration metric: took 21.170416ms to restartPrimaryControlPlane
	I0217 11:57:17.258062  100380 kubeadm.go:394] duration metric: took 54.240079ms to StartCluster
	I0217 11:57:17.258077  100380 settings.go:142] acquiring lock: {Name:mkf730c657b1c2d5a481dbeb02dabe7dfa17f2d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:17.258150  100380 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:57:17.258639  100380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-77349/kubeconfig: {Name:mka23a5c17f10bb58374e83755a2ac6a44464e11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:17.258848  100380 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0217 11:57:17.258870  100380 start.go:241] waiting for startup goroutines ...
	I0217 11:57:17.258884  100380 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0217 11:57:17.259112  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:57:17.261397  100380 out.go:177] * Enabled addons: 
	I0217 11:57:17.262668  100380 addons.go:514] duration metric: took 3.785415ms for enable addons: enabled=[]
	I0217 11:57:17.262703  100380 start.go:246] waiting for cluster config update ...
	I0217 11:57:17.262713  100380 start.go:255] writing updated cluster config ...
	I0217 11:57:17.264127  100380 out.go:201] 
	I0217 11:57:17.265577  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:57:17.265703  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:57:17.267570  100380 out.go:177] * Starting "ha-783738-m02" control-plane node in "ha-783738" cluster
	I0217 11:57:17.268921  100380 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0217 11:57:17.268950  100380 cache.go:56] Caching tarball of preloaded images
	I0217 11:57:17.269061  100380 preload.go:172] Found /home/jenkins/minikube-integration/20427-77349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0217 11:57:17.269074  100380 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0217 11:57:17.269250  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:57:17.269484  100380 start.go:360] acquireMachinesLock for ha-783738-m02: {Name:mk05ba8323ae77ab7dcc14c378d65810d956fdc0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0217 11:57:17.269554  100380 start.go:364] duration metric: took 46.103µs to acquireMachinesLock for "ha-783738-m02"
	I0217 11:57:17.269576  100380 start.go:96] Skipping create...Using existing machine configuration
	I0217 11:57:17.269584  100380 fix.go:54] fixHost starting: m02
	I0217 11:57:17.269846  100380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:57:17.269891  100380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:57:17.284961  100380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45093
	I0217 11:57:17.285438  100380 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:57:17.285964  100380 main.go:141] libmachine: Using API Version  1
	I0217 11:57:17.285991  100380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:57:17.286358  100380 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:57:17.286562  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:17.286744  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetState
	I0217 11:57:17.288288  100380 fix.go:112] recreateIfNeeded on ha-783738-m02: state=Stopped err=<nil>
	I0217 11:57:17.288317  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	W0217 11:57:17.288473  100380 fix.go:138] unexpected machine state, will restart: <nil>
	I0217 11:57:17.290496  100380 out.go:177] * Restarting existing kvm2 VM for "ha-783738-m02" ...
	I0217 11:57:17.291737  100380 main.go:141] libmachine: (ha-783738-m02) Calling .Start
	I0217 11:57:17.291936  100380 main.go:141] libmachine: (ha-783738-m02) starting domain...
	I0217 11:57:17.291957  100380 main.go:141] libmachine: (ha-783738-m02) ensuring networks are active...
	I0217 11:57:17.292625  100380 main.go:141] libmachine: (ha-783738-m02) Ensuring network default is active
	I0217 11:57:17.292935  100380 main.go:141] libmachine: (ha-783738-m02) Ensuring network mk-ha-783738 is active
	I0217 11:57:17.293260  100380 main.go:141] libmachine: (ha-783738-m02) getting domain XML...
	I0217 11:57:17.293893  100380 main.go:141] libmachine: (ha-783738-m02) creating domain...
	I0217 11:57:18.506378  100380 main.go:141] libmachine: (ha-783738-m02) waiting for IP...
	I0217 11:57:18.507364  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:18.507881  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:18.507974  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:18.507878  100573 retry.go:31] will retry after 190.071186ms: waiting for domain to come up
	I0217 11:57:18.699203  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:18.699617  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:18.699682  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:18.699590  100573 retry.go:31] will retry after 254.022024ms: waiting for domain to come up
	I0217 11:57:18.955132  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:18.955578  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:18.955602  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:18.955533  100573 retry.go:31] will retry after 332.594264ms: waiting for domain to come up
	I0217 11:57:19.290041  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:19.290494  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:19.290519  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:19.290472  100573 retry.go:31] will retry after 550.484931ms: waiting for domain to come up
	I0217 11:57:19.842363  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:19.842844  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:19.842873  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:19.842822  100573 retry.go:31] will retry after 743.60757ms: waiting for domain to come up
	I0217 11:57:20.587667  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:20.588025  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:20.588058  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:20.587981  100573 retry.go:31] will retry after 701.750144ms: waiting for domain to come up
	I0217 11:57:21.290980  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:21.291500  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:21.291530  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:21.291445  100573 retry.go:31] will retry after 755.313925ms: waiting for domain to come up
	I0217 11:57:22.047876  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:22.048286  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:22.048318  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:22.048246  100573 retry.go:31] will retry after 1.338224716s: waiting for domain to come up
	I0217 11:57:23.388238  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:23.388759  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:23.388796  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:23.388727  100573 retry.go:31] will retry after 1.367661407s: waiting for domain to come up
	I0217 11:57:24.758376  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:24.758722  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:24.758764  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:24.758718  100573 retry.go:31] will retry after 2.08548116s: waiting for domain to come up
	I0217 11:57:26.846621  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:26.847150  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:26.847253  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:26.847166  100573 retry.go:31] will retry after 1.933968455s: waiting for domain to come up
	I0217 11:57:28.782369  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:28.782785  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:28.782815  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:28.782752  100573 retry.go:31] will retry after 3.162167749s: waiting for domain to come up
	I0217 11:57:31.947188  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:31.947578  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:31.947603  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:31.947545  100573 retry.go:31] will retry after 3.924986004s: waiting for domain to come up
	I0217 11:57:35.877102  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:35.877437  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has current primary IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:35.877460  100380 main.go:141] libmachine: (ha-783738-m02) found domain IP: 192.168.39.31
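
The run of retry.go:31 lines above is a jittered, roughly exponential backoff: the delay grows from 190 ms to just under 4 s while libmachine polls the libvirt network's DHCP leases for the domain's MAC. A standalone sketch of that retry shape, assuming a hypothetical lookupIP helper in place of the real lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt network's
// DHCP leases for the domain's MAC address.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries with a jittered, roughly exponential backoff until the
// lookup succeeds or the deadline passes, mirroring the retry.go lines above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 190 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 4*time.Second { // cap so a slow boot is still polled every few seconds
			backoff *= 2
		}
	}
	return "", fmt.Errorf("timed out after %v waiting for IP", timeout)
}

func main() {
	if ip, err := waitForIP(5 * time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found domain IP:", ip)
	}
}
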
	I0217 11:57:35.877473  100380 main.go:141] libmachine: (ha-783738-m02) reserving static IP address...
	I0217 11:57:35.877915  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "ha-783738-m02", mac: "52:54:00:06:81:a2", ip: "192.168.39.31"} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:35.877942  100380 main.go:141] libmachine: (ha-783738-m02) DBG | skip adding static IP to network mk-ha-783738 - found existing host DHCP lease matching {name: "ha-783738-m02", mac: "52:54:00:06:81:a2", ip: "192.168.39.31"}
	I0217 11:57:35.877960  100380 main.go:141] libmachine: (ha-783738-m02) reserved static IP address 192.168.39.31 for domain ha-783738-m02
	I0217 11:57:35.877972  100380 main.go:141] libmachine: (ha-783738-m02) waiting for SSH...
	I0217 11:57:35.877983  100380 main.go:141] libmachine: (ha-783738-m02) DBG | Getting to WaitForSSH function...
	I0217 11:57:35.880382  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:35.880801  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:35.880830  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:35.880903  100380 main.go:141] libmachine: (ha-783738-m02) DBG | Using SSH client type: external
	I0217 11:57:35.880925  100380 main.go:141] libmachine: (ha-783738-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa (-rw-------)
	I0217 11:57:35.880955  100380 main.go:141] libmachine: (ha-783738-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0217 11:57:35.880970  100380 main.go:141] libmachine: (ha-783738-m02) DBG | About to run SSH command:
	I0217 11:57:35.880982  100380 main.go:141] libmachine: (ha-783738-m02) DBG | exit 0
	I0217 11:57:36.005182  100380 main.go:141] libmachine: (ha-783738-m02) DBG | SSH cmd err, output: <nil>: 
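
WaitForSSH above shells out to /usr/bin/ssh with StrictHostKeyChecking=no and the machine's id_rsa, running `exit 0` until it exits cleanly. The same probe can be done in-process; a sketch with golang.org/x/crypto/ssh, where the address, user, and key path echo the log but are plain parameters here:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// probeSSH dials the guest and runs `exit 0`, the same readiness test the
// external ssh invocation above performs.
func probeSSH(addr, user, keyPath string) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // ~ StrictHostKeyChecking=no
		Timeout:         10 * time.Second,            // ~ ConnectTimeout=10
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	for attempt := 0; attempt < 30; attempt++ { // poll until sshd answers
		if err := probeSSH("192.168.39.31:22", "docker", "id_rsa"); err == nil {
			fmt.Println("SSH is up")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
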
	I0217 11:57:36.005527  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetConfigRaw
	I0217 11:57:36.006216  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetIP
	I0217 11:57:36.008704  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.009084  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.009118  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.009443  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:57:36.009639  100380 machine.go:93] provisionDockerMachine start ...
	I0217 11:57:36.009657  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:36.009816  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.011849  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.012187  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.012218  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.012360  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.012557  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.012710  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.012836  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.012947  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.013115  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.013130  100380 main.go:141] libmachine: About to run SSH command:
	hostname
	I0217 11:57:36.113056  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0217 11:57:36.113093  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetMachineName
	I0217 11:57:36.113376  100380 buildroot.go:166] provisioning hostname "ha-783738-m02"
	I0217 11:57:36.113403  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetMachineName
	I0217 11:57:36.113566  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.116233  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.116606  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.116634  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.116762  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.116907  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.117025  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.117242  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.117464  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.117681  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.117699  100380 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-783738-m02 && echo "ha-783738-m02" | sudo tee /etc/hostname
	I0217 11:57:36.230628  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-783738-m02
	
	I0217 11:57:36.230670  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.233644  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.233991  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.234015  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.234196  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.234491  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.234686  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.234856  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.235006  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.235194  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.235211  100380 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-783738-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-783738-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-783738-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0217 11:57:36.341290  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
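
The script above is an idempotent /etc/hosts edit: grep -xq '.*\sha-783738-m02' asks whether any whole line already ends with the new hostname, and only if not does it rewrite the 127.0.1.1 line in place or append one. The same check-then-edit expressed in Go, with the path parameterized so the sketch can run against a scratch file instead of the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setHostsEntry mirrors the shell logic above: no-op when a line already ends
// with the hostname, otherwise rewrite the 127.0.1.1 line or append one.
func setHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil // entry already present
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.Match(data) {
		data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
	} else {
		data = append(data, []byte("127.0.1.1 "+hostname+"\n")...)
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	// usage sketch against a scratch file
	_ = os.WriteFile("hosts.test", []byte("127.0.0.1 localhost\n"), 0644)
	if err := setHostsEntry("hosts.test", "ha-783738-m02"); err != nil {
		fmt.Println(err)
		return
	}
	b, _ := os.ReadFile("hosts.test")
	fmt.Print(string(b))
}
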
	I0217 11:57:36.341332  100380 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20427-77349/.minikube CaCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20427-77349/.minikube}
	I0217 11:57:36.341348  100380 buildroot.go:174] setting up certificates
	I0217 11:57:36.341360  100380 provision.go:84] configureAuth start
	I0217 11:57:36.341373  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetMachineName
	I0217 11:57:36.341646  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetIP
	I0217 11:57:36.344453  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.344944  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.344981  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.345158  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.347416  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.347719  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.347744  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.347910  100380 provision.go:143] copyHostCerts
	I0217 11:57:36.347943  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem
	I0217 11:57:36.347989  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem, removing ...
	I0217 11:57:36.347999  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem
	I0217 11:57:36.348065  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem (1082 bytes)
	I0217 11:57:36.348156  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem
	I0217 11:57:36.348190  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem, removing ...
	I0217 11:57:36.348200  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem
	I0217 11:57:36.348229  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem (1123 bytes)
	I0217 11:57:36.348286  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem
	I0217 11:57:36.348310  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem, removing ...
	I0217 11:57:36.348320  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem
	I0217 11:57:36.348347  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem (1675 bytes)
	I0217 11:57:36.348413  100380 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem org=jenkins.ha-783738-m02 san=[127.0.0.1 192.168.39.31 ha-783738-m02 localhost minikube]
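
provision.go:117 issues a per-machine server certificate signed by the minikube CA, with the SAN list shown (loopback, the VM's IP, its hostname, localhost, minikube) so the Docker TLS endpoint verifies under any of those names. A sketch of that issuance with crypto/x509; the throwaway CA, key size, and validity window are illustrative rather than minikube's actual parameters:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate whose SANs match the log line:
// san=[127.0.0.1 192.168.39.31 ha-783738-m02 localhost minikube]
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-783738-m02"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.31")},
		DNSNames:     []string{"ha-783738-m02", "localhost", "minikube"},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
}

func main() {
	// throwaway CA standing in for ~/.minikube/certs/ca.pem and ca-key.pem
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
		NotBefore: time.Now().Add(-time.Hour), NotAfter: time.Now().AddDate(10, 0, 0),
		IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	der, err := issueServerCert(caCert, caKey)
	fmt.Println(len(der), err)
}
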
	I0217 11:57:36.476199  100380 provision.go:177] copyRemoteCerts
	I0217 11:57:36.476256  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0217 11:57:36.476280  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.479126  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.479497  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.479529  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.479677  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.479868  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.480073  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.480258  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	I0217 11:57:36.558954  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0217 11:57:36.559023  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0217 11:57:36.581755  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0217 11:57:36.581816  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0217 11:57:36.604328  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0217 11:57:36.604411  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0217 11:57:36.626183  100380 provision.go:87] duration metric: took 284.807453ms to configureAuth
	I0217 11:57:36.626219  100380 buildroot.go:189] setting minikube options for container-runtime
	I0217 11:57:36.626492  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:57:36.626522  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:36.626768  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.629194  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.629569  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.629594  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.629740  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.629904  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.630077  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.630201  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.630389  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.630601  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.630614  100380 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0217 11:57:36.730964  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0217 11:57:36.730995  100380 buildroot.go:70] root file system type: tmpfs
	I0217 11:57:36.731148  100380 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0217 11:57:36.731184  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.733718  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.734119  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.734150  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.734340  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.734539  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.734714  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.734847  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.734986  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.735198  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.735304  100380 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.249"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0217 11:57:36.846599  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.249
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0217 11:57:36.846633  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.849370  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.849714  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.849733  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.849923  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.850116  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.850290  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.850443  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.850608  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.850788  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.850805  100380 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0217 11:57:38.700010  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0217 11:57:38.700036  100380 machine.go:96] duration metric: took 2.690384734s to provisionDockerMachine
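
The command at 11:57:36.850 is an update-if-changed idiom: render docker.service.new, diff it against the installed unit, and only move it into place and daemon-reload/enable/restart when they differ. On this freshly restarted VM the diff fails with "can't stat" because no unit is installed yet, so the swap branch runs and the enable symlink is created. A sketch of the same decision in Go; the rendered body is a placeholder and the program needs root to actually write:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit swaps in a new unit file and restarts the service only when the
// rendered body differs from what is installed, as the shell one-liner does.
func updateUnit(rendered []byte) error {
	const unit = "/lib/systemd/system/docker.service"
	if current, err := os.ReadFile(unit); err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: skip daemon-reload and the service restart
	}
	if err := os.WriteFile(unit+".new", rendered, 0644); err != nil {
		return err
	}
	if err := os.Rename(unit+".new", unit); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := updateUnit([]byte("[Unit]\nDescription=Docker Application Container Engine\n")); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
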
	I0217 11:57:38.700051  100380 start.go:293] postStartSetup for "ha-783738-m02" (driver="kvm2")
	I0217 11:57:38.700060  100380 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0217 11:57:38.700075  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:38.700389  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0217 11:57:38.700425  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:38.703068  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.703435  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:38.703465  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.703605  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:38.703807  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:38.703952  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:38.704102  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	I0217 11:57:38.783381  100380 ssh_runner.go:195] Run: cat /etc/os-release
	I0217 11:57:38.787188  100380 info.go:137] Remote host: Buildroot 2023.02.9
	I0217 11:57:38.787215  100380 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-77349/.minikube/addons for local assets ...
	I0217 11:57:38.787270  100380 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-77349/.minikube/files for local assets ...
	I0217 11:57:38.787341  100380 filesync.go:149] local asset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> 845022.pem in /etc/ssl/certs
	I0217 11:57:38.787352  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> /etc/ssl/certs/845022.pem
	I0217 11:57:38.787430  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0217 11:57:38.796091  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem --> /etc/ssl/certs/845022.pem (1708 bytes)
	I0217 11:57:38.817716  100380 start.go:296] duration metric: took 117.649565ms for postStartSetup
	I0217 11:57:38.817759  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:38.818052  100380 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0217 11:57:38.818087  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:38.820354  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.820669  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:38.820694  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.820809  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:38.820978  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:38.821138  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:38.821273  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	I0217 11:57:38.900214  100380 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0217 11:57:38.900294  100380 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
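
The backup/restore pair above exists because the guest's root filesystem is tmpfs (see "root file system type: tmpfs" earlier in the log), so anything written under /etc is presumably lost across the VM restart and has to be rsynced back from /var/lib/minikube/backup. A sketch of that restore step:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// restoreBackup copies persisted directories back over the tmpfs root,
// matching `sudo rsync --archive --update /var/lib/minikube/backup/etc /` above.
func restoreBackup(dirs []string) error {
	for _, d := range dirs {
		cmd := exec.Command("sudo", "rsync", "--archive", "--update",
			"/var/lib/minikube/backup/"+d, "/")
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("rsync %s: %v\n%s", d, err, out)
		}
	}
	return nil
}

func main() {
	if err := restoreBackup([]string{"etc"}); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
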
	I0217 11:57:38.959273  100380 fix.go:56] duration metric: took 21.689681729s for fixHost
	I0217 11:57:38.959327  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:38.961853  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.962326  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:38.962364  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.962591  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:38.962788  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:38.962952  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:38.963062  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:38.963238  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:38.963408  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:38.963419  100380 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0217 11:57:39.071315  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739793459.049434891
	
	I0217 11:57:39.071339  100380 fix.go:216] guest clock: 1739793459.049434891
	I0217 11:57:39.071349  100380 fix.go:229] Guest: 2025-02-17 11:57:39.049434891 +0000 UTC Remote: 2025-02-17 11:57:38.959302801 +0000 UTC m=+48.782039917 (delta=90.13209ms)
	I0217 11:57:39.071366  100380 fix.go:200] guest clock delta is within tolerance: 90.13209ms
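
The clock check runs `date +%s.%N` in the guest and compares it against the host's wall clock; here the ~90 ms delta is within tolerance, so no time sync is needed. A sketch of the parse-and-compare with an illustrative 1 s tolerance; parsing via float64 costs sub-microsecond precision, which is harmless for a skew check:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// checkClockDelta parses `date +%s.%N` output from the guest and reports the
// absolute skew against the local clock plus whether it is within tolerance.
func checkClockDelta(guestOut string, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance, nil
}

func main() {
	// the guest timestamp from the log above
	delta, ok, err := checkClockDelta("1739793459.049434891", time.Second)
	fmt.Println(delta, ok, err)
}
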
	I0217 11:57:39.071371  100380 start.go:83] releasing machines lock for "ha-783738-m02", held for 21.801804436s
	I0217 11:57:39.071393  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:39.071600  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetIP
	I0217 11:57:39.074321  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.074707  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:39.074736  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.076949  100380 out.go:177] * Found network options:
	I0217 11:57:39.078428  100380 out.go:177]   - NO_PROXY=192.168.39.249
	W0217 11:57:39.079686  100380 proxy.go:119] fail to check proxy env: Error ip not in block
	I0217 11:57:39.079714  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:39.080218  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:39.080403  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:39.080510  100380 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0217 11:57:39.080551  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	W0217 11:57:39.080631  100380 proxy.go:119] fail to check proxy env: Error ip not in block
	I0217 11:57:39.080722  100380 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0217 11:57:39.080748  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:39.083432  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.083453  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.083887  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:39.083914  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:39.083933  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.083949  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.084264  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:39.084411  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:39.084597  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:39.084609  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:39.084763  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:39.084784  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	I0217 11:57:39.084915  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:39.085034  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	W0217 11:57:39.178061  100380 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0217 11:57:39.178137  100380 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0217 11:57:39.195964  100380 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0217 11:57:39.196001  100380 start.go:495] detecting cgroup driver to use...
	I0217 11:57:39.196148  100380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0217 11:57:39.216666  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0217 11:57:39.226815  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0217 11:57:39.236611  100380 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0217 11:57:39.236669  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0217 11:57:39.246500  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0217 11:57:39.256691  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0217 11:57:39.266509  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0217 11:57:39.276231  100380 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0217 11:57:39.286298  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0217 11:57:39.296149  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0217 11:57:39.305984  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0217 11:57:39.315650  100380 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0217 11:57:39.324721  100380 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0217 11:57:39.324777  100380 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0217 11:57:39.334429  100380 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0217 11:57:39.343052  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:39.458041  100380 ssh_runner.go:195] Run: sudo systemctl restart containerd
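
The status-255 sysctl failure a few lines up is the expected first probe: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, which is why the log calls the failure "might be okay" and immediately falls back to modprobe before enabling IPv4 forwarding. A sketch of that probe-then-load fallback (it needs root to take effect):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureNetfilter probes the bridge-nf sysctl; when that fails because
// br_netfilter is not loaded (the status-255 case above), it loads the
// module, then enables IPv4 forwarding the same way the log does.
func ensureNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// "cannot stat /proc/sys/net/bridge/...": module not loaded yet
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v", err)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
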
	I0217 11:57:39.483361  100380 start.go:495] detecting cgroup driver to use...
	I0217 11:57:39.483453  100380 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0217 11:57:39.501404  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0217 11:57:39.522545  100380 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0217 11:57:39.545214  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0217 11:57:39.557462  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0217 11:57:39.569445  100380 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0217 11:57:39.593668  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0217 11:57:39.606767  100380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0217 11:57:39.623713  100380 ssh_runner.go:195] Run: which cri-dockerd
	I0217 11:57:39.627306  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0217 11:57:39.635920  100380 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0217 11:57:39.651184  100380 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0217 11:57:39.767938  100380 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0217 11:57:39.884761  100380 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0217 11:57:39.884806  100380 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0217 11:57:39.900934  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:40.013206  100380 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0217 11:58:41.088581  100380 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.075335279s)
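
The restart has just failed after a bit over a minute; what follows is minikube collecting `journalctl --no-pager -u docker` so the RUNTIME_ENABLE error below carries the daemon's own log rather than just the exit status. A sketch of that run-then-attach-diagnostics pattern:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// restartWithDiagnostics restarts a unit and, on a non-zero exit, appends the
// unit's journal to the returned error, as the failure output below shows.
func restartWithDiagnostics(unit string) error {
	if err := exec.Command("sudo", "systemctl", "restart", unit).Run(); err != nil {
		journal, _ := exec.Command("sudo", "journalctl", "--no-pager", "-u", unit).CombinedOutput()
		return fmt.Errorf("sudo systemctl restart %s: %v\n%s", unit, err, journal)
	}
	return nil
}

func main() {
	if err := restartWithDiagnostics("docker"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
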
	I0217 11:58:41.088680  100380 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0217 11:58:41.109373  100380 out.go:201] 
	W0217 11:58:41.110918  100380 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Feb 17 11:57:37 ha-783738-m02 systemd[1]: Starting Docker Application Container Engine...
	Feb 17 11:57:37 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:37.207555071Z" level=info msg="Starting up"
	Feb 17 11:57:37 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:37.208523706Z" level=info msg="containerd not running, starting managed containerd"
	Feb 17 11:57:37 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:37.209284365Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=499
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.234357473Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.253922324Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254071326Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254155313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254195097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254502645Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254572700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254826671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254880442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254926515Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254965881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.255209553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.255502921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257578132Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257723954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257912930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257960933Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.258214223Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.258292090Z" level=info msg="metadata content store policy set" policy=shared
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262281766Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262389757Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262437193Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262478052Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262523730Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262614966Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262915194Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263049035Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263094390Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263137669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263176270Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263217488Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263254710Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263292496Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263339613Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263377065Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263418085Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263453223Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263511094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263549833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263589341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263631649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263726157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263766086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263809930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263847665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263885358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263932212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263972615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264020660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264063975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264103157Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264158305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264194401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264230305Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264327104Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264417123Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264457690Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264499822Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264534568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264575047Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264616722Z" level=info msg="NRI interface is disabled by configuration."
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264938960Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.265032087Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.265091203Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.265132167Z" level=info msg="containerd successfully booted in 0.032037s"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.237803305Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.295143778Z" level=info msg="Loading containers: start."
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.484051173Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.565431513Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.632528889Z" level=info msg="Loading containers: done."
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.653906274Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.653941707Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.653962858Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.654196375Z" level=info msg="Daemon has completed initialization"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.676178691Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.676315120Z" level=info msg="API listen on [::]:2376"
	Feb 17 11:57:38 ha-783738-m02 systemd[1]: Started Docker Application Container Engine.
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.005718953Z" level=info msg="Processing signal 'terminated'"
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.007186879Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.007378782Z" level=info msg="Daemon shutdown complete"
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.007446197Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 17 11:57:40 ha-783738-m02 systemd[1]: Stopping Docker Application Container Engine...
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.008214930Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 17 11:57:41 ha-783738-m02 systemd[1]: docker.service: Deactivated successfully.
	Feb 17 11:57:41 ha-783738-m02 systemd[1]: Stopped Docker Application Container Engine.
	Feb 17 11:57:41 ha-783738-m02 systemd[1]: Starting Docker Application Container Engine...
	Feb 17 11:57:41 ha-783738-m02 dockerd[1120]: time="2025-02-17T11:57:41.051838490Z" level=info msg="Starting up"
	Feb 17 11:58:41 ha-783738-m02 dockerd[1120]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 17 11:58:41 ha-783738-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 17 11:58:41 ha-783738-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 17 11:58:41 ha-783738-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0217 11:58:41.110964  100380 out.go:270] * 
	W0217 11:58:41.111815  100380 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0217 11:58:41.113412  100380 out.go:201] 
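
	Note: the m02 journal above pins the proximate failure. docker.service on ha-783738-m02 came up cleanly at 11:57:38, was stopped and restarted at 11:57:40-41 (minikube's provisioner typically restarts it after writing its config), and the restarted dockerd (PID 1120) then spent its full 60 s timeout dialing /run/containerd/containerd.sock before exiting at 11:58:41, so the unit failed and the node never rejoined. A quick way to check whether containerd is the blocker (a diagnostic sketch, assuming SSH access to the guest, e.g. via minikube ssh -p ha-783738 -n m02):

		$ sudo systemctl status containerd docker --no-pager
		$ sudo journalctl -u containerd --since "11:57" --no-pager | tail -n 20
		$ sudo ctr --address /run/containerd/containerd.sock version   # errors if the socket is not being served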
	
	
	==> Docker <==
	Feb 17 11:57:23 ha-783738 dockerd[1134]: time="2025-02-17T11:57:23.574956613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 17 11:57:44 ha-783738 dockerd[1126]: time="2025-02-17T11:57:44.652472286Z" level=info msg="ignoring event" container=0eab009d1fe54d541fe5b166302e5af1a153e8aa37ad6a133704c1f40918f7c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 17 11:57:44 ha-783738 dockerd[1134]: time="2025-02-17T11:57:44.653058320Z" level=info msg="shim disconnected" id=0eab009d1fe54d541fe5b166302e5af1a153e8aa37ad6a133704c1f40918f7c9 namespace=moby
	Feb 17 11:57:44 ha-783738 dockerd[1134]: time="2025-02-17T11:57:44.653483834Z" level=warning msg="cleaning up after shim disconnected" id=0eab009d1fe54d541fe5b166302e5af1a153e8aa37ad6a133704c1f40918f7c9 namespace=moby
	Feb 17 11:57:44 ha-783738 dockerd[1134]: time="2025-02-17T11:57:44.653545740Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 17 11:57:45 ha-783738 dockerd[1126]: time="2025-02-17T11:57:45.663576348Z" level=info msg="ignoring event" container=1683ded4f12ef91eea7067f33248f5185b17f0532a1c1480efe277bcd8accfe6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 17 11:57:45 ha-783738 dockerd[1134]: time="2025-02-17T11:57:45.664110377Z" level=info msg="shim disconnected" id=1683ded4f12ef91eea7067f33248f5185b17f0532a1c1480efe277bcd8accfe6 namespace=moby
	Feb 17 11:57:45 ha-783738 dockerd[1134]: time="2025-02-17T11:57:45.664165013Z" level=warning msg="cleaning up after shim disconnected" id=1683ded4f12ef91eea7067f33248f5185b17f0532a1c1480efe277bcd8accfe6 namespace=moby
	Feb 17 11:57:45 ha-783738 dockerd[1134]: time="2025-02-17T11:57:45.664175956Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.854960498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.855123802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.855151191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.855373177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.858152322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.858222102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.858232103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.858372930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 17 11:58:25 ha-783738 dockerd[1126]: time="2025-02-17T11:58:25.325613613Z" level=info msg="ignoring event" container=0d8dd6abc6b0262f0e2de062685df6bbc87187dd14023d0fd12b894f48bd2001 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 17 11:58:25 ha-783738 dockerd[1134]: time="2025-02-17T11:58:25.326644755Z" level=info msg="shim disconnected" id=0d8dd6abc6b0262f0e2de062685df6bbc87187dd14023d0fd12b894f48bd2001 namespace=moby
	Feb 17 11:58:25 ha-783738 dockerd[1134]: time="2025-02-17T11:58:25.326737271Z" level=warning msg="cleaning up after shim disconnected" id=0d8dd6abc6b0262f0e2de062685df6bbc87187dd14023d0fd12b894f48bd2001 namespace=moby
	Feb 17 11:58:25 ha-783738 dockerd[1134]: time="2025-02-17T11:58:25.326756884Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 17 11:58:26 ha-783738 dockerd[1126]: time="2025-02-17T11:58:26.334899301Z" level=info msg="ignoring event" container=2e90f752fdc0601abb5401e228fa8355b97462cfd9f4dafb766f56eaf8e7b13a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 17 11:58:26 ha-783738 dockerd[1134]: time="2025-02-17T11:58:26.335703125Z" level=info msg="shim disconnected" id=2e90f752fdc0601abb5401e228fa8355b97462cfd9f4dafb766f56eaf8e7b13a namespace=moby
	Feb 17 11:58:26 ha-783738 dockerd[1134]: time="2025-02-17T11:58:26.335778773Z" level=warning msg="cleaning up after shim disconnected" id=2e90f752fdc0601abb5401e228fa8355b97462cfd9f4dafb766f56eaf8e7b13a namespace=moby
	Feb 17 11:58:26 ha-783738 dockerd[1134]: time="2025-02-17T11:58:26.335795547Z" level=info msg="cleaning up dead shim" namespace=moby
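
	Note: on the primary these are teardown events for crash-looping static pods: the exited kube-apiserver container (0d8dd6abc6b02...) was cleaned up at 11:58:25 and kube-controller-manager (2e90f752fdc06...) at 11:58:26; their exit reasons appear in their own sections below. To enumerate recently exited containers on the node (sketch):

		$ docker ps -a --filter status=exited --format '{{.ID}}\t{{.Names}}\t{{.Status}}' | head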
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2e90f752fdc06       019ee182b58e2       39 seconds ago       Exited              kube-controller-manager   4                   eeb1b6c34de35       kube-controller-manager-ha-783738
	0d8dd6abc6b02       95c0bda56fc4d       39 seconds ago       Exited              kube-apiserver            4                   a531c479908eb       kube-apiserver-ha-783738
	d524d25a3256e       2b0d6572d062c       About a minute ago   Running             kube-scheduler            2                   5633bc5aacc12       kube-scheduler-ha-783738
	2b8921c7d9f71       22f88dde2caa4       About a minute ago   Running             kube-vip                  1                   5f0329677cb70       kube-vip-ha-783738
	aeb757a6db075       a9e7e6b294baf       About a minute ago   Running             etcd                      2                   8c5c6a3fd0ba0       etcd-ha-783738
	8c236b02a8316       6e38f40d628db       4 minutes ago        Exited              storage-provisioner       3                   3b5478be91580       storage-provisioner
	f460be4118731       8c811b4aec35f       4 minutes ago        Exited              busybox                   1                   cd41205ee4990       busybox-58667487b6-mp8w2
	5caaef1da4142       e29f9c7391fd9       4 minutes ago        Exited              kube-proxy                1                   3bada7fe972b9       kube-proxy-pgwb4
	95f567924c5ee       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   1                   33c8d49183b1a       coredns-668d6bf9bc-bhrvt
	b4ccb469b39af       df3849d954c98       4 minutes ago        Exited              kindnet-cni               1                   bba5ce66a15dd       kindnet-t72ln
	b674f5b7afb38       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   1                   bfd8d387b7e96       coredns-668d6bf9bc-k5k72
	1395373a3c212       2b0d6572d062c       5 minutes ago        Exited              kube-scheduler            1                   fe3b7022472a7       kube-scheduler-ha-783738
	0644596c7e815       a9e7e6b294baf       5 minutes ago        Exited              etcd                      1                   a79f0d4414c0a       etcd-ha-783738
	905fe651f5a2d       22f88dde2caa4       5 minutes ago        Exited              kube-vip                  0                   6e727a24edb43       kube-vip-ha-783738
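
	Note: the table shows the control plane half-up at capture time: etcd, kube-scheduler, and kube-vip are Running, while kube-apiserver and kube-controller-manager are Exited on attempt 4, i.e. crash-looping. The last output of the most recent apiserver attempt can be pulled by container ID from the table (sketch):

		$ docker logs --tail 20 0d8dd6abc6b02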
	
	
	==> coredns [95f567924c5e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:54083 - 5538 "HINFO IN 6952713337195609451.67698316276633629. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.046526479s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[586752551]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.037) (total time: 30004ms):
	Trace[586752551]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (11:54:29.042)
	Trace[586752551]: [30.004932204s] [30.004932204s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[31748474]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.037) (total time: 30005ms):
	Trace[31748474]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (11:54:29.043)
	Trace[31748474]: [30.005260877s] [30.005260877s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1254162758]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.043) (total time: 30000ms):
	Trace[1254162758]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:54:29.044)
	Trace[1254162758]: [30.000938039s] [30.000938039s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b674f5b7afb3] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47652 - 30454 "HINFO IN 3233588620932119307.6917908993167898246. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026177844s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1310151553]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.042) (total time: 30001ms):
	Trace[1310151553]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:54:29.043)
	Trace[1310151553]: [30.001216976s] [30.001216976s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1951418715]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.039) (total time: 30005ms):
	Trace[1951418715]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (11:54:29.044)
	Trace[1951418715]: [30.005382964s] [30.005382964s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[606941673]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.038) (total time: 30006ms):
	Trace[606941673]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30006ms (11:54:29.044)
	Trace[606941673]: [30.006431575s] [30.006431575s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
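
	Note: both coredns replicas log the same pre-restart symptom (timestamps 11:53-11:54, before the cluster stop): every list/watch against the in-cluster apiserver VIP 10.96.0.1:443 dialed for 30 s and timed out, and the pods then received SIGTERM during shutdown. 10.96.0.1 is the default `kubernetes` Service ClusterIP; a dial timeout there (rather than a refusal) usually means kube-proxy had no healthy apiserver endpoint to forward to. Once an apiserver is reachable again, the backing endpoints can be checked with (sketch):

		$ kubectl get endpoints kubernetes -n default -o wide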
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0217 11:58:43.760089    2895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0217 11:58:43.761876    2895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0217 11:58:43.763264    2895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0217 11:58:43.764782    2895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0217 11:58:43.766206    2895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
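
	Note: `kubectl describe nodes` runs here on the guest against https://localhost:8443 and is refused, consistent with the container status above: no kube-apiserver was listening at capture time. Two quick checks from the node (sketch):

		$ sudo ss -ltnp | grep 8443          # empty while the apiserver pod is down
		$ curl -ksS https://localhost:8443/healthz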
	
	
	==> dmesg <==
	[Feb17 11:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052638] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037697] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.851026] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.992141] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Feb17 11:57] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.664405] systemd-fstab-generator[476]: Ignoring "noauto" option for root device
	[  +0.058988] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058916] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
	[  +2.348725] systemd-fstab-generator[1055]: Ignoring "noauto" option for root device
	[  +0.313948] systemd-fstab-generator[1092]: Ignoring "noauto" option for root device
	[  +0.110900] systemd-fstab-generator[1104]: Ignoring "noauto" option for root device
	[  +0.140552] systemd-fstab-generator[1118]: Ignoring "noauto" option for root device
	[  +2.263360] kauditd_printk_skb: 199 callbacks suppressed
	[  +0.301992] systemd-fstab-generator[1377]: Ignoring "noauto" option for root device
	[  +0.125509] systemd-fstab-generator[1390]: Ignoring "noauto" option for root device
	[  +0.118202] systemd-fstab-generator[1402]: Ignoring "noauto" option for root device
	[  +0.144218] systemd-fstab-generator[1417]: Ignoring "noauto" option for root device
	[  +0.508597] systemd-fstab-generator[1584]: Ignoring "noauto" option for root device
	[  +6.843964] kauditd_printk_skb: 180 callbacks suppressed
	[  +8.294455] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [0644596c7e81] <==
	{"level":"warn","ts":"2025-02-17T11:56:37.953386Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"416.799075ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-02-17T11:56:37.953402Z","caller":"traceutil/trace.go:171","msg":"trace[234534568] range","detail":"{range_begin:/registry/deployments/; range_end:/registry/deployments0; }","duration":"416.832899ms","start":"2025-02-17T11:56:37.536564Z","end":"2025-02-17T11:56:37.953396Z","steps":["trace[234534568] 'agreement among raft nodes before linearized reading'  (duration: 416.815476ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-17T11:56:37.953416Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-17T11:56:37.536510Z","time spent":"416.902435ms","remote":"127.0.0.1:58532","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":0,"response size":0,"request content":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true "}
	2025/02/17 11:56:37 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-02-17T11:56:37.953469Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.057072714s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-02-17T11:56:37.953479Z","caller":"traceutil/trace.go:171","msg":"trace[2020420396] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"1.057490424s","start":"2025-02-17T11:56:36.895986Z","end":"2025-02-17T11:56:37.953476Z","steps":["trace[2020420396] 'agreement among raft nodes before linearized reading'  (duration: 1.057479846s)"],"step_count":1}
	{"level":"warn","ts":"2025-02-17T11:56:37.953491Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-17T11:56:36.895975Z","time spent":"1.057513489s","remote":"127.0.0.1:58120","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	2025/02/17 11:56:37 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-02-17T11:56:37.953557Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.889027766s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-02-17T11:56:37.953567Z","caller":"traceutil/trace.go:171","msg":"trace[159538693] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; }","duration":"1.889056203s","start":"2025-02-17T11:56:36.064508Z","end":"2025-02-17T11:56:37.953564Z","steps":["trace[159538693] 'agreement among raft nodes before linearized reading'  (duration: 1.88904446s)"],"step_count":1}
	{"level":"warn","ts":"2025-02-17T11:56:37.953580Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-17T11:56:36.064496Z","time spent":"1.889079683s","remote":"127.0.0.1:58254","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":0,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true "}
	2025/02/17 11:56:37 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-02-17T11:56:38.012328Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.249:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-17T11:56:38.012367Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.249:2379: use of closed network connection"}
	{"level":"info","ts":"2025-02-17T11:56:38.012413Z","caller":"etcdserver/server.go:1534","msg":"skipped leadership transfer; local server is not leader","local-member-id":"318ee90c3446d547","current-leader-member-id":"0"}
	{"level":"info","ts":"2025-02-17T11:56:38.012793Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.012892Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.012915Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.012991Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.013022Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.013134Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.013145Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.016636Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.39.249:2380"}
	{"level":"info","ts":"2025-02-17T11:56:38.016720Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.39.249:2380"}
	{"level":"info","ts":"2025-02-17T11:56:38.016728Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"ha-783738","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.249:2380"],"advertise-client-urls":["https://192.168.39.249:2379"]}
	
	
	==> etcd [aeb757a6db07] <==
	{"level":"warn","ts":"2025-02-17T11:58:38.833992Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069265,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-02-17T11:58:39.105914Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"645ac05e9f2d470a","rtt":"0s","error":"dial tcp 192.168.39.31:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-02-17T11:58:39.106133Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"645ac05e9f2d470a","rtt":"0s","error":"dial tcp 192.168.39.31:2380: connect: connection refused"}
	{"level":"info","ts":"2025-02-17T11:58:39.236323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 is starting a new election at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:39.236529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:39.236639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgPreVoteResp from 318ee90c3446d547 at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:39.236682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 3, index: 3030] sent MsgPreVote request to 645ac05e9f2d470a at term 3"}
	{"level":"warn","ts":"2025-02-17T11:58:39.334913Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069265,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-02-17T11:58:39.836002Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069265,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-02-17T11:58:40.336905Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069265,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-02-17T11:58:40.836559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 is starting a new election at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:40.836692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:40.836729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgPreVoteResp from 318ee90c3446d547 at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:40.836762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 3, index: 3030] sent MsgPreVote request to 645ac05e9f2d470a at term 3"}
	{"level":"warn","ts":"2025-02-17T11:58:40.837045Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069265,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-02-17T11:58:41.084143Z","caller":"etcdserver/server.go:2161","msg":"failed to publish local member to cluster through raft","local-member-id":"318ee90c3446d547","local-member-attributes":"{Name:ha-783738 ClientURLs:[https://192.168.39.249:2379]}","request-path":"/0/members/318ee90c3446d547/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2025-02-17T11:58:41.337434Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069265,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-02-17T11:58:41.827365Z","caller":"etcdserver/v3_server.go:932","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2025-02-17T11:58:41.827445Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.000504247s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2025-02-17T11:58:41.827469Z","caller":"traceutil/trace.go:171","msg":"trace[1958910963] range","detail":"{range_begin:; range_end:; }","duration":"7.000551306s","start":"2025-02-17T11:58:34.826907Z","end":"2025-02-17T11:58:41.827459Z","steps":["trace[1958910963] 'agreement among raft nodes before linearized reading'  (duration: 7.000502454s)"],"step_count":1}
	{"level":"error","ts":"2025-02-17T11:58:41.827501Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2171\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2688\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3142\nnet/http.(*conn).serve\n\tnet/http/server.go:2044"}
	{"level":"info","ts":"2025-02-17T11:58:42.436651Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 is starting a new election at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:42.436750Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:42.436772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgPreVoteResp from 318ee90c3446d547 at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:42.436803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 3, index: 3030] sent MsgPreVote request to 645ac05e9f2d470a at term 3"}
	
	
	==> kernel <==
	 11:58:43 up 1 min,  0 users,  load average: 0.52, 0.30, 0.11
	Linux ha-783738 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b4ccb469b39a] <==
	I0217 11:56:00.000922       1 main.go:324] Node ha-783738-m02 has CIDR [10.244.1.0/24] 
	I0217 11:56:00.001386       1 main.go:297] Handling node with IPs: map[192.168.39.216:{}]
	I0217 11:56:00.001417       1 main.go:324] Node ha-783738-m03 has CIDR [10.244.2.0/24] 
	I0217 11:56:00.002870       1 main.go:297] Handling node with IPs: map[192.168.39.168:{}]
	I0217 11:56:00.003089       1 main.go:324] Node ha-783738-m04 has CIDR [10.244.3.0/24] 
	I0217 11:56:10.003758       1 main.go:297] Handling node with IPs: map[192.168.39.31:{}]
	I0217 11:56:10.004120       1 main.go:324] Node ha-783738-m02 has CIDR [10.244.1.0/24] 
	I0217 11:56:10.004466       1 main.go:297] Handling node with IPs: map[192.168.39.216:{}]
	I0217 11:56:10.004579       1 main.go:324] Node ha-783738-m03 has CIDR [10.244.2.0/24] 
	I0217 11:56:10.004848       1 main.go:297] Handling node with IPs: map[192.168.39.168:{}]
	I0217 11:56:10.004993       1 main.go:324] Node ha-783738-m04 has CIDR [10.244.3.0/24] 
	I0217 11:56:10.005322       1 main.go:297] Handling node with IPs: map[192.168.39.249:{}]
	I0217 11:56:10.005440       1 main.go:301] handling current node
	I0217 11:56:20.008868       1 main.go:297] Handling node with IPs: map[192.168.39.249:{}]
	I0217 11:56:20.008992       1 main.go:301] handling current node
	I0217 11:56:20.009032       1 main.go:297] Handling node with IPs: map[192.168.39.31:{}]
	I0217 11:56:20.009107       1 main.go:324] Node ha-783738-m02 has CIDR [10.244.1.0/24] 
	I0217 11:56:20.009351       1 main.go:297] Handling node with IPs: map[192.168.39.168:{}]
	I0217 11:56:20.009426       1 main.go:324] Node ha-783738-m04 has CIDR [10.244.3.0/24] 
	I0217 11:56:30.000205       1 main.go:297] Handling node with IPs: map[192.168.39.168:{}]
	I0217 11:56:30.000320       1 main.go:324] Node ha-783738-m04 has CIDR [10.244.3.0/24] 
	I0217 11:56:30.000673       1 main.go:297] Handling node with IPs: map[192.168.39.249:{}]
	I0217 11:56:30.004120       1 main.go:301] handling current node
	I0217 11:56:30.004403       1 main.go:297] Handling node with IPs: map[192.168.39.31:{}]
	I0217 11:56:30.004484       1 main.go:324] Node ha-783738-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [0d8dd6abc6b0] <==
	W0217 11:58:05.008746       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0217 11:58:05.009254       1 options.go:238] external host was not specified, using 192.168.39.249
	I0217 11:58:05.012100       1 server.go:143] Version: v1.32.1
	I0217 11:58:05.012139       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0217 11:58:05.254592       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0217 11:58:05.265931       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0217 11:58:05.302917       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0217 11:58:05.302958       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0217 11:58:05.303380       1 instance.go:233] Using reconciler: lease
	W0217 11:58:25.253372       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0217 11:58:25.253478       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0217 11:58:25.304453       1 instance.go:226] Error creating leases: error creating storage factory: context deadline exceeded
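
	Note: the apiserver starts, blocks for ~20 s in "Using reconciler: lease" while initializing its etcd-backed storage, and dies with "Error creating leases: error creating storage factory: context deadline exceeded". The gRPC dials fail with "authentication handshake failed: context canceled" rather than connection refused, suggesting etcd was reachable but unable to serve quorum reads, matching the etcd section above; the kubelet then restarts the static pod, producing attempt 4 in the table. The same stall is reproducible with a linearizable read (sketch; same cert paths as above):

		$ sudo ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 \
		    --cacert /var/lib/minikube/certs/etcd/ca.crt \
		    --cert /var/lib/minikube/certs/etcd/server.crt \
		    --key /var/lib/minikube/certs/etcd/server.key \
		    get /registry/health --command-timeout=5s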
	
	
	==> kube-controller-manager [2e90f752fdc0] <==
	I0217 11:58:05.575513       1 serving.go:386] Generated self-signed cert in-memory
	I0217 11:58:05.850219       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0217 11:58:05.850380       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0217 11:58:05.851835       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0217 11:58:05.852508       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0217 11:58:05.852713       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0217 11:58:05.852833       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0217 11:58:26.312388       1 controllermanager.go:230] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.249:8443/healthz\": dial tcp 192.168.39.249:8443: connect: connection refused"
	
	
	==> kube-proxy [5caaef1da414] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0217 11:53:59.616708       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0217 11:53:59.651486       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.249"]
	E0217 11:53:59.651650       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0217 11:53:59.696326       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0217 11:53:59.696377       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0217 11:53:59.696401       1 server_linux.go:170] "Using iptables Proxier"
	I0217 11:53:59.710221       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0217 11:53:59.711347       1 server.go:497] "Version info" version="v1.32.1"
	I0217 11:53:59.711380       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0217 11:53:59.716398       1 config.go:199] "Starting service config controller"
	I0217 11:53:59.717714       1 config.go:105] "Starting endpoint slice config controller"
	I0217 11:53:59.717746       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0217 11:53:59.718142       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0217 11:53:59.718615       1 config.go:329] "Starting node config controller"
	I0217 11:53:59.718758       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0217 11:53:59.817915       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0217 11:53:59.819456       1 shared_informer.go:320] Caches are synced for service config
	I0217 11:53:59.821373       1 shared_informer.go:320] Caches are synced for node config
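
	Note: the nftables errors at the top of this section ("Operation not supported" for `add table ip kube-proxy`) only indicate that the Buildroot guest kernel lacks nftables support; kube-proxy logged them while cleaning up, found no IPv6 iptables either, and fell back to the single-stack IPv4 iptables proxier, which then synced normally. These warnings are from 11:53, before the restart, and are unrelated to the failure. The programmed rules can be confirmed with (sketch):

		$ sudo iptables -t nat -L KUBE-SERVICES -n | head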
	
	
	==> kube-scheduler [1395373a3c21] <==
	E0217 11:53:52.919534       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:53.771964       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:53.772105       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:55.316775       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.249:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:55.316841       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.249:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:55.317229       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.249:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:55.317287       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.249:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:55.599247       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.249:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:55.599332       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.249:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:55.855471       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.249:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:55.855524       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.249:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:56.059180       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:56.059238       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:59.073926       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0217 11:53:59.074031       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0217 11:53:59.074570       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0217 11:53:59.075126       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0217 11:53:59.075450       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0217 11:53:59.074624       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0217 11:54:13.896773       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0217 11:56:05.957670       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-58667487b6-v7x5t\": pod busybox-58667487b6-v7x5t is already assigned to node \"ha-783738-m04\"" plugin="DefaultBinder" pod="default/busybox-58667487b6-v7x5t" node="ha-783738-m04"
	E0217 11:56:05.971236       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod c5148a30-9b13-42ed-87c8-723413b074d3(default/busybox-58667487b6-v7x5t) wasn't assumed so cannot be forgotten" pod="default/busybox-58667487b6-v7x5t"
	E0217 11:56:05.971303       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-58667487b6-v7x5t\": pod busybox-58667487b6-v7x5t is already assigned to node \"ha-783738-m04\"" pod="default/busybox-58667487b6-v7x5t"
	I0217 11:56:05.971509       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-58667487b6-v7x5t" node="ha-783738-m04"
	E0217 11:56:37.999387       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d524d25a3256] <==
	E0217 11:58:26.313559       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.249:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37922->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.313700       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37926->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.313773       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37926->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.313906       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.249:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37956->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.313971       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.249:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37956->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.314101       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.249:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37960->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.314185       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.249:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37960->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.314462       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37888->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.314547       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37888->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.314713       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37930->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.314798       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37930->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.314960       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.249:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37948->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.315166       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.249:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37948->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.315243       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: Get "https://192.168.39.249:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37940->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.315352       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.249:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37940->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:29.432094       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.249:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:58:29.432235       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.249:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:58:32.758441       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.249:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:58:32.758583       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.249:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:58:33.069242       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:58:33.069380       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:58:35.727701       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.249:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:58:35.727922       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.249:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:58:36.974377       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.249:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:58:36.974419       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.249:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Feb 17 11:58:26 ha-783738 kubelet[1591]: I0217 11:58:26.495253    1591 scope.go:117] "RemoveContainer" containerID="0d8dd6abc6b0262f0e2de062685df6bbc87187dd14023d0fd12b894f48bd2001"
	Feb 17 11:58:26 ha-783738 kubelet[1591]: E0217 11:58:26.495523    1591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-783738_kube-system(77f0e47471ffa89381403ccfd101e5e7)\"" pod="kube-system/kube-apiserver-ha-783738" podUID="77f0e47471ffa89381403ccfd101e5e7"
	Feb 17 11:58:26 ha-783738 kubelet[1591]: E0217 11:58:26.703334    1591 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-783738\" not found"
	Feb 17 11:58:27 ha-783738 kubelet[1591]: E0217 11:58:27.238622    1591 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-783738.1824fce9ab5e06e9  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-783738,UID:ha-783738,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-783738,},FirstTimestamp:2025-02-17 11:57:16.604499689 +0000 UTC m=+0.220042798,LastTimestamp:2025-02-17 11:57:16.604499689 +0000 UTC m=+0.220042798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-783738,}"
	Feb 17 11:58:30 ha-783738 kubelet[1591]: E0217 11:58:30.957653    1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-783738\" not found" node="ha-783738"
	Feb 17 11:58:30 ha-783738 kubelet[1591]: I0217 11:58:30.957784    1591 scope.go:117] "RemoveContainer" containerID="0d8dd6abc6b0262f0e2de062685df6bbc87187dd14023d0fd12b894f48bd2001"
	Feb 17 11:58:30 ha-783738 kubelet[1591]: E0217 11:58:30.957928    1591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-783738_kube-system(77f0e47471ffa89381403ccfd101e5e7)\"" pod="kube-system/kube-apiserver-ha-783738" podUID="77f0e47471ffa89381403ccfd101e5e7"
	Feb 17 11:58:31 ha-783738 kubelet[1591]: I0217 11:58:31.169391    1591 kubelet_node_status.go:76] "Attempting to register node" node="ha-783738"
	Feb 17 11:58:32 ha-783738 kubelet[1591]: E0217 11:58:32.182236    1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-783738\" not found" node="ha-783738"
	Feb 17 11:58:32 ha-783738 kubelet[1591]: I0217 11:58:32.182362    1591 scope.go:117] "RemoveContainer" containerID="2e90f752fdc0601abb5401e228fa8355b97462cfd9f4dafb766f56eaf8e7b13a"
	Feb 17 11:58:32 ha-783738 kubelet[1591]: E0217 11:58:32.182489    1591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-783738_kube-system(37cb2af166ca362ca24afd5a80241d47)\"" pod="kube-system/kube-controller-manager-ha-783738" podUID="37cb2af166ca362ca24afd5a80241d47"
	Feb 17 11:58:33 ha-783738 kubelet[1591]: E0217 11:58:33.382650    1591 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.254:8443: connect: no route to host" node="ha-783738"
	Feb 17 11:58:33 ha-783738 kubelet[1591]: E0217 11:58:33.382815    1591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-783738?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Feb 17 11:58:33 ha-783738 kubelet[1591]: W0217 11:58:33.382655    1591 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-783738&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	Feb 17 11:58:33 ha-783738 kubelet[1591]: E0217 11:58:33.383127    1591 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-783738&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Feb 17 11:58:36 ha-783738 kubelet[1591]: E0217 11:58:36.704343    1591 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-783738\" not found"
	Feb 17 11:58:37 ha-783738 kubelet[1591]: E0217 11:58:37.748003    1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-783738\" not found" node="ha-783738"
	Feb 17 11:58:39 ha-783738 kubelet[1591]: E0217 11:58:39.526616    1591 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-783738.1824fce9ab5e06e9  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-783738,UID:ha-783738,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-783738,},FirstTimestamp:2025-02-17 11:57:16.604499689 +0000 UTC m=+0.220042798,LastTimestamp:2025-02-17 11:57:16.604499689 +0000 UTC m=+0.220042798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-783738,}"
	Feb 17 11:58:39 ha-783738 kubelet[1591]: E0217 11:58:39.748034    1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-783738\" not found" node="ha-783738"
	Feb 17 11:58:40 ha-783738 kubelet[1591]: I0217 11:58:40.384759    1591 kubelet_node_status.go:76] "Attempting to register node" node="ha-783738"
	Feb 17 11:58:42 ha-783738 kubelet[1591]: E0217 11:58:42.599676    1591 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.254:8443: connect: no route to host" node="ha-783738"
	Feb 17 11:58:42 ha-783738 kubelet[1591]: E0217 11:58:42.599851    1591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-783738?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Feb 17 11:58:43 ha-783738 kubelet[1591]: E0217 11:58:43.747946    1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-783738\" not found" node="ha-783738"
	Feb 17 11:58:43 ha-783738 kubelet[1591]: I0217 11:58:43.748020    1591 scope.go:117] "RemoveContainer" containerID="0d8dd6abc6b0262f0e2de062685df6bbc87187dd14023d0fd12b894f48bd2001"
	Feb 17 11:58:43 ha-783738 kubelet[1591]: E0217 11:58:43.748145    1591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-783738_kube-system(77f0e47471ffa89381403ccfd101e5e7)\"" pod="kube-system/kube-apiserver-ha-783738" podUID="77f0e47471ffa89381403ccfd101e5e7"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-783738 -n ha-783738
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-783738 -n ha-783738: exit status 2 (230.915215ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-783738" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.75s)

x
+
TestMultiControlPlane/serial/AddSecondaryNode (1.6s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-783738 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p ha-783738 --control-plane -v=7 --alsologtostderr: exit status 103 (298.457976ms)

-- stdout --
	* The control-plane node ha-783738-m02 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-783738"

-- /stdout --
** stderr ** 
	I0217 11:58:44.410508  101192 out.go:345] Setting OutFile to fd 1 ...
	I0217 11:58:44.410629  101192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:58:44.410639  101192 out.go:358] Setting ErrFile to fd 2...
	I0217 11:58:44.410643  101192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:58:44.410816  101192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-77349/.minikube/bin
	I0217 11:58:44.411060  101192 mustload.go:65] Loading cluster: ha-783738
	I0217 11:58:44.411480  101192 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:58:44.411878  101192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:58:44.411933  101192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:58:44.427039  101192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45445
	I0217 11:58:44.427454  101192 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:58:44.428004  101192 main.go:141] libmachine: Using API Version  1
	I0217 11:58:44.428026  101192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:58:44.428357  101192 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:58:44.428562  101192 main.go:141] libmachine: (ha-783738) Calling .GetState
	I0217 11:58:44.430138  101192 host.go:66] Checking if "ha-783738" exists ...
	I0217 11:58:44.430412  101192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:58:44.430451  101192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:58:44.445309  101192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45493
	I0217 11:58:44.445723  101192 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:58:44.446173  101192 main.go:141] libmachine: Using API Version  1
	I0217 11:58:44.446194  101192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:58:44.446446  101192 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:58:44.446633  101192 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:58:44.447071  101192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:58:44.447106  101192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:58:44.461166  101192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39711
	I0217 11:58:44.461627  101192 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:58:44.462112  101192 main.go:141] libmachine: Using API Version  1
	I0217 11:58:44.462127  101192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:58:44.462430  101192 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:58:44.462609  101192 main.go:141] libmachine: (ha-783738-m02) Calling .GetState
	I0217 11:58:44.464014  101192 host.go:66] Checking if "ha-783738-m02" exists ...
	I0217 11:58:44.464457  101192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:58:44.464504  101192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:58:44.479442  101192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45941
	I0217 11:58:44.479883  101192 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:58:44.480377  101192 main.go:141] libmachine: Using API Version  1
	I0217 11:58:44.480397  101192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:58:44.480730  101192 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:58:44.480898  101192 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:58:44.481069  101192 api_server.go:166] Checking apiserver status ...
	I0217 11:58:44.481142  101192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0217 11:58:44.481179  101192 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:58:44.484063  101192 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:58:44.484515  101192 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:58:44.484545  101192 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:58:44.484859  101192 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:58:44.485075  101192 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:58:44.485252  101192 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:58:44.485443  101192 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	W0217 11:58:44.570385  101192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W0217 11:58:44.570680  101192 out.go:270] ! The control-plane node ha-783738 apiserver is not running (will try others): (state=Stopped)
	! The control-plane node ha-783738 apiserver is not running (will try others): (state=Stopped)
	I0217 11:58:44.570697  101192 api_server.go:166] Checking apiserver status ...
	I0217 11:58:44.570737  101192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0217 11:58:44.570761  101192 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:58:44.573934  101192 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:58:44.574398  101192 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:58:44.574439  101192 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:58:44.574529  101192 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:58:44.574735  101192 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:58:44.574888  101192 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:58:44.575039  101192 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	W0217 11:58:44.657152  101192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0217 11:58:44.659507  101192 out.go:177] * The control-plane node ha-783738-m02 apiserver is not running: (state=Stopped)
	I0217 11:58:44.661244  101192 out.go:177]   To start a cluster, run: "minikube start -p ha-783738"

** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 node add -p ha-783738 --control-plane -v=7 --alsologtostderr" : exit status 103
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-783738 -n ha-783738
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-783738 -n ha-783738: exit status 2 (220.716494ms)

-- stdout --
	Running

                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:50 UTC | 17 Feb 25 11:50 UTC |
	|         | ha-783738-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n ha-783738-m04 sudo cat                                          | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:50 UTC | 17 Feb 25 11:50 UTC |
	|         | /home/docker/cp-test_ha-783738-m03_ha-783738-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-783738 cp testdata/cp-test.txt                                                | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:50 UTC | 17 Feb 25 11:50 UTC |
	|         | ha-783738-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:50 UTC | 17 Feb 25 11:50 UTC |
	|         | ha-783738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-783738 cp ha-783738-m04:/home/docker/cp-test.txt                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:50 UTC | 17 Feb 25 11:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3703533036/001/cp-test_ha-783738-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-783738 cp ha-783738-m04:/home/docker/cp-test.txt                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738:/home/docker/cp-test_ha-783738-m04_ha-783738.txt                       |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n ha-783738 sudo cat                                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | /home/docker/cp-test_ha-783738-m04_ha-783738.txt                                 |           |         |         |                     |                     |
	| cp      | ha-783738 cp ha-783738-m04:/home/docker/cp-test.txt                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m02:/home/docker/cp-test_ha-783738-m04_ha-783738-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n ha-783738-m02 sudo cat                                          | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | /home/docker/cp-test_ha-783738-m04_ha-783738-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-783738 cp ha-783738-m04:/home/docker/cp-test.txt                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m03:/home/docker/cp-test_ha-783738-m04_ha-783738-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n ha-783738-m03 sudo cat                                          | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | /home/docker/cp-test_ha-783738-m04_ha-783738-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-783738 node stop m02 -v=7                                                     | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-783738 node start m02 -v=7                                                    | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-783738 -v=7                                                           | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-783738 -v=7                                                                | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:52 UTC | 17 Feb 25 11:52 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-783738 --wait=true -v=7                                                    | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:52 UTC | 17 Feb 25 11:56 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-783738                                                                | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:56 UTC |                     |
	| node    | ha-783738 node delete m03 -v=7                                                   | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:56 UTC | 17 Feb 25 11:56 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-783738 stop -v=7                                                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:56 UTC | 17 Feb 25 11:56 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-783738 --wait=true                                                         | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:56 UTC |                     |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	| node    | add -p ha-783738                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:58 UTC |                     |
	|         | --control-plane -v=7                                                             |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/17 11:56:50
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0217 11:56:50.215291  100380 out.go:345] Setting OutFile to fd 1 ...
	I0217 11:56:50.215609  100380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:56:50.215619  100380 out.go:358] Setting ErrFile to fd 2...
	I0217 11:56:50.215624  100380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:56:50.215819  100380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-77349/.minikube/bin
	I0217 11:56:50.216353  100380 out.go:352] Setting JSON to false
	I0217 11:56:50.217237  100380 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5958,"bootTime":1739787452,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0217 11:56:50.217362  100380 start.go:139] virtualization: kvm guest
	I0217 11:56:50.219910  100380 out.go:177] * [ha-783738] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0217 11:56:50.221323  100380 out.go:177]   - MINIKUBE_LOCATION=20427
	I0217 11:56:50.221334  100380 notify.go:220] Checking for updates...
	I0217 11:56:50.223835  100380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0217 11:56:50.224954  100380 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:56:50.226180  100380 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-77349/.minikube
	I0217 11:56:50.227361  100380 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0217 11:56:50.228473  100380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0217 11:56:50.229885  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:56:50.230261  100380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:56:50.230308  100380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:56:50.245239  100380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46091
	I0217 11:56:50.245761  100380 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:56:50.246359  100380 main.go:141] libmachine: Using API Version  1
	I0217 11:56:50.246382  100380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:56:50.246775  100380 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:56:50.246962  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:56:50.247230  100380 driver.go:394] Setting default libvirt URI to qemu:///system
	I0217 11:56:50.247538  100380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:56:50.247594  100380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:56:50.262713  100380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
	I0217 11:56:50.263097  100380 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:56:50.263692  100380 main.go:141] libmachine: Using API Version  1
	I0217 11:56:50.263752  100380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:56:50.264059  100380 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:56:50.264289  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:56:50.297981  100380 out.go:177] * Using the kvm2 driver based on existing profile
	I0217 11:56:50.299143  100380 start.go:297] selected driver: kvm2
	I0217 11:56:50.299155  100380 start.go:901] validating driver "kvm2" against &{Name:ha-783738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.168 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 11:56:50.299304  100380 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0217 11:56:50.299646  100380 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0217 11:56:50.299706  100380 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20427-77349/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0217 11:56:50.314229  100380 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0217 11:56:50.314917  100380 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0217 11:56:50.314949  100380 cni.go:84] Creating CNI manager for ""
	I0217 11:56:50.315000  100380 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0217 11:56:50.315060  100380 start.go:340] cluster config:
	{Name:ha-783738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.168 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 11:56:50.315190  100380 iso.go:125] acquiring lock: {Name:mk4380b7bda8fcd8bced9705ff1695c3fb7dac0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0217 11:56:50.317519  100380 out.go:177] * Starting "ha-783738" primary control-plane node in "ha-783738" cluster
	I0217 11:56:50.318547  100380 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0217 11:56:50.318578  100380 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20427-77349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0217 11:56:50.318588  100380 cache.go:56] Caching tarball of preloaded images
	I0217 11:56:50.318681  100380 preload.go:172] Found /home/jenkins/minikube-integration/20427-77349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0217 11:56:50.318695  100380 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0217 11:56:50.318829  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:56:50.319009  100380 start.go:360] acquireMachinesLock for ha-783738: {Name:mk05ba8323ae77ab7dcc14c378d65810d956fdc0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0217 11:56:50.319055  100380 start.go:364] duration metric: took 23.519µs to acquireMachinesLock for "ha-783738"
	I0217 11:56:50.319080  100380 start.go:96] Skipping create...Using existing machine configuration
	I0217 11:56:50.319088  100380 fix.go:54] fixHost starting: 
	I0217 11:56:50.319353  100380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:56:50.319391  100380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:56:50.333761  100380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34803
	I0217 11:56:50.334152  100380 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:56:50.334693  100380 main.go:141] libmachine: Using API Version  1
	I0217 11:56:50.334714  100380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:56:50.335000  100380 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:56:50.335210  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:56:50.335347  100380 main.go:141] libmachine: (ha-783738) Calling .GetState
	I0217 11:56:50.336730  100380 fix.go:112] recreateIfNeeded on ha-783738: state=Stopped err=<nil>
	I0217 11:56:50.336752  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	W0217 11:56:50.336864  100380 fix.go:138] unexpected machine state, will restart: <nil>
	I0217 11:56:50.338814  100380 out.go:177] * Restarting existing kvm2 VM for "ha-783738" ...
	I0217 11:56:50.340020  100380 main.go:141] libmachine: (ha-783738) Calling .Start
	I0217 11:56:50.340200  100380 main.go:141] libmachine: (ha-783738) starting domain...
	I0217 11:56:50.340221  100380 main.go:141] libmachine: (ha-783738) ensuring networks are active...
	I0217 11:56:50.340845  100380 main.go:141] libmachine: (ha-783738) Ensuring network default is active
	I0217 11:56:50.341268  100380 main.go:141] libmachine: (ha-783738) Ensuring network mk-ha-783738 is active
	I0217 11:56:50.341612  100380 main.go:141] libmachine: (ha-783738) getting domain XML...
	I0217 11:56:50.342286  100380 main.go:141] libmachine: (ha-783738) creating domain...
	I0217 11:56:51.533335  100380 main.go:141] libmachine: (ha-783738) waiting for IP...
	I0217 11:56:51.534198  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:51.534571  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:51.534631  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:51.534554  100416 retry.go:31] will retry after 214.112758ms: waiting for domain to come up
	I0217 11:56:51.750038  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:51.750535  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:51.750587  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:51.750528  100416 retry.go:31] will retry after 287.575076ms: waiting for domain to come up
	I0217 11:56:52.040019  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:52.040473  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:52.040515  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:52.040452  100416 retry.go:31] will retry after 303.389275ms: waiting for domain to come up
	I0217 11:56:52.345057  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:52.345400  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:52.345452  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:52.345383  100416 retry.go:31] will retry after 580.610288ms: waiting for domain to come up
	I0217 11:56:52.927102  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:52.927623  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:52.927663  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:52.927596  100416 retry.go:31] will retry after 470.88869ms: waiting for domain to come up
	I0217 11:56:53.400293  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:53.400698  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:53.400725  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:53.400636  100416 retry.go:31] will retry after 645.102407ms: waiting for domain to come up
	I0217 11:56:54.046798  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:54.047309  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:54.047365  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:54.047265  100416 retry.go:31] will retry after 993.016218ms: waiting for domain to come up
	I0217 11:56:55.041450  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:55.041808  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:55.041828  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:55.041790  100416 retry.go:31] will retry after 1.096274529s: waiting for domain to come up
	I0217 11:56:56.139475  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:56.139892  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:56.139957  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:56.139882  100416 retry.go:31] will retry after 1.840421804s: waiting for domain to come up
	I0217 11:56:57.981618  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:57.982040  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:57.982068  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:57.981979  100416 retry.go:31] will retry after 1.8969141s: waiting for domain to come up
	I0217 11:56:59.881026  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:59.881535  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:59.881570  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:59.881471  100416 retry.go:31] will retry after 1.890240518s: waiting for domain to come up
	I0217 11:57:01.773274  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:01.773728  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:57:01.773779  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:57:01.773696  100416 retry.go:31] will retry after 3.046762911s: waiting for domain to come up
	I0217 11:57:04.823999  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:04.824458  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:57:04.824497  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:57:04.824453  100416 retry.go:31] will retry after 3.819063496s: waiting for domain to come up
	I0217 11:57:08.647831  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.648309  100380 main.go:141] libmachine: (ha-783738) found domain IP: 192.168.39.249
	I0217 11:57:08.648334  100380 main.go:141] libmachine: (ha-783738) reserving static IP address...
	I0217 11:57:08.648347  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has current primary IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.648799  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "ha-783738", mac: "52:54:00:fb:6f:65", ip: "192.168.39.249"} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.648824  100380 main.go:141] libmachine: (ha-783738) DBG | skip adding static IP to network mk-ha-783738 - found existing host DHCP lease matching {name: "ha-783738", mac: "52:54:00:fb:6f:65", ip: "192.168.39.249"}
	I0217 11:57:08.648835  100380 main.go:141] libmachine: (ha-783738) reserved static IP address 192.168.39.249 for domain ha-783738
	I0217 11:57:08.648846  100380 main.go:141] libmachine: (ha-783738) waiting for SSH...
	I0217 11:57:08.648862  100380 main.go:141] libmachine: (ha-783738) DBG | Getting to WaitForSSH function...
	I0217 11:57:08.650828  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.651193  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.651224  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.651387  100380 main.go:141] libmachine: (ha-783738) DBG | Using SSH client type: external
	I0217 11:57:08.651414  100380 main.go:141] libmachine: (ha-783738) DBG | Using SSH private key: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa (-rw-------)
	I0217 11:57:08.651435  100380 main.go:141] libmachine: (ha-783738) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0217 11:57:08.651464  100380 main.go:141] libmachine: (ha-783738) DBG | About to run SSH command:
	I0217 11:57:08.651480  100380 main.go:141] libmachine: (ha-783738) DBG | exit 0
	I0217 11:57:08.776922  100380 main.go:141] libmachine: (ha-783738) DBG | SSH cmd err, output: <nil>: 
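
WaitForSSH above shells out to the system ssh binary with host-key checking disabled and runs `exit 0` until the command exits cleanly, which is the signal that sshd is up and the injected key is accepted. A rough equivalent, with options copied from the logged command line (sshReady and the key path below are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns nil once "ssh ... exit 0" succeeds, i.e. sshd is up and
// the injected key is accepted. Options mirror the logged external ssh call.
func sshReady(user, host, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			fmt.Sprintf("%s@%s", user, host),
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s@%s not ready after %v", user, host, timeout)
}

func main() {
	fmt.Println(sshReady("docker", "192.168.39.249", "/path/to/machines/ha-783738/id_rsa", time.Minute))
}
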
	I0217 11:57:08.777326  100380 main.go:141] libmachine: (ha-783738) Calling .GetConfigRaw
	I0217 11:57:08.777959  100380 main.go:141] libmachine: (ha-783738) Calling .GetIP
	I0217 11:57:08.780301  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.780692  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.780735  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.780948  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:57:08.781137  100380 machine.go:93] provisionDockerMachine start ...
	I0217 11:57:08.781154  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:08.781442  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:08.783478  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.783868  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.783897  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.784048  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:08.784237  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:08.784393  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:08.784570  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:08.784738  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:08.784917  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:08.784928  100380 main.go:141] libmachine: About to run SSH command:
	hostname
	I0217 11:57:08.889484  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0217 11:57:08.889525  100380 main.go:141] libmachine: (ha-783738) Calling .GetMachineName
	I0217 11:57:08.889783  100380 buildroot.go:166] provisioning hostname "ha-783738"
	I0217 11:57:08.889818  100380 main.go:141] libmachine: (ha-783738) Calling .GetMachineName
	I0217 11:57:08.890003  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:08.892666  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.893027  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.893060  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.893202  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:08.893391  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:08.893536  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:08.893661  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:08.893787  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:08.893949  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:08.893960  100380 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-783738 && echo "ha-783738" | sudo tee /etc/hostname
	I0217 11:57:09.014626  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-783738
	
	I0217 11:57:09.014653  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.017274  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.017710  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.017744  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.017939  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.018131  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.018348  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.018473  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.018701  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:09.018967  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:09.018994  100380 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-783738' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-783738/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-783738' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0217 11:57:09.133208  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0217 11:57:09.133247  100380 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20427-77349/.minikube CaCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20427-77349/.minikube}
	I0217 11:57:09.133278  100380 buildroot.go:174] setting up certificates
	I0217 11:57:09.133295  100380 provision.go:84] configureAuth start
	I0217 11:57:09.133331  100380 main.go:141] libmachine: (ha-783738) Calling .GetMachineName
	I0217 11:57:09.133680  100380 main.go:141] libmachine: (ha-783738) Calling .GetIP
	I0217 11:57:09.136393  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.136746  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.136771  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.136918  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.139192  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.139545  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.139583  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.139699  100380 provision.go:143] copyHostCerts
	I0217 11:57:09.139734  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem
	I0217 11:57:09.139786  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem, removing ...
	I0217 11:57:09.139804  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem
	I0217 11:57:09.139883  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem (1082 bytes)
	I0217 11:57:09.139996  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem
	I0217 11:57:09.140023  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem, removing ...
	I0217 11:57:09.140030  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem
	I0217 11:57:09.140079  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem (1123 bytes)
	I0217 11:57:09.140159  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem
	I0217 11:57:09.140184  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem, removing ...
	I0217 11:57:09.140191  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem
	I0217 11:57:09.140228  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem (1675 bytes)
	I0217 11:57:09.140314  100380 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem org=jenkins.ha-783738 san=[127.0.0.1 192.168.39.249 ha-783738 localhost minikube]
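
provision.go generates a server certificate signed by the minikube CA with exactly the SANs listed above (127.0.0.1, the VM IP, the hostname, localhost, minikube). A stripped-down sketch of issuing such a cert with Go's crypto/x509; error handling is elided, and the CA is generated in-memory here rather than loaded from ca.pem/ca-key.pem as minikube does:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key pair (in practice loaded from ca.pem / ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-783738"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-783738", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.249")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
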
	I0217 11:57:09.269804  100380 provision.go:177] copyRemoteCerts
	I0217 11:57:09.269900  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0217 11:57:09.269935  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.272592  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.272916  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.272945  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.273095  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.273282  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.273464  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.273600  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:09.355256  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0217 11:57:09.355331  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0217 11:57:09.378132  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0217 11:57:09.378243  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0217 11:57:09.399749  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0217 11:57:09.399830  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0217 11:57:09.421183  100380 provision.go:87] duration metric: took 287.855291ms to configureAuth
	I0217 11:57:09.421207  100380 buildroot.go:189] setting minikube options for container-runtime
	I0217 11:57:09.421432  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:57:09.421460  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:09.421765  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.424701  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.425141  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.425173  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.425370  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.425557  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.425734  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.425883  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.426059  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:09.426283  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:09.426297  100380 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0217 11:57:09.534976  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0217 11:57:09.535006  100380 buildroot.go:70] root file system type: tmpfs
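
The guest reports tmpfs for /, confirming it is running the RAM-backed buildroot ISO rather than a persistent root disk; the probe is just the df one-liner above. The same check wrapped in Go (illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// rootFSType runs the same probe as the log: df --output=fstype / | tail -n 1
func rootFSType() (string, error) {
	out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	fs, err := rootFSType()
	fmt.Println(fs, err)
}
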
	I0217 11:57:09.535125  100380 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0217 11:57:09.535163  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.537739  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.538108  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.538126  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.538307  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.538481  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.538662  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.538802  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.538949  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:09.539142  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:09.539243  100380 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0217 11:57:09.658326  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0217 11:57:09.658371  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.661372  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.661838  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.661875  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.662085  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.662300  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.662435  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.662559  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.662707  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:09.662897  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:09.662913  100380 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0217 11:57:11.588699  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0217 11:57:11.588766  100380 machine.go:96] duration metric: took 2.807616414s to provisionDockerMachine
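
The `diff -u ... || { mv ...; daemon-reload; enable; restart; }` command above is an idempotent unit install: the rendered docker.service.new replaces the installed unit, and triggers a restart, only when it actually differs (here diff failed because no unit existed yet, so the full install ran). The same guard as a Go helper (installUnit is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// installUnit swaps unit.new into place and restarts the service only
// when the rendered unit actually changed, mirroring the logged command.
func installUnit(name string) error {
	path := "/lib/systemd/system/" + name
	// diff exits non-zero when the files differ (or the target is missing).
	if err := exec.Command("sudo", "diff", "-u", path, path+".new").Run(); err == nil {
		return nil // unchanged: nothing to do
	}
	steps := [][]string{
		{"sudo", "mv", path + ".new", path},
		{"sudo", "systemctl", "-f", "daemon-reload"},
		{"sudo", "systemctl", "-f", "enable", name},
		{"sudo", "systemctl", "-f", "restart", name},
	}
	for _, s := range steps {
		if err := exec.Command(s[0], s[1:]...).Run(); err != nil {
			return fmt.Errorf("%v: %w", s, err)
		}
	}
	return nil
}

func main() { fmt.Println(installUnit("docker.service")) }
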
	I0217 11:57:11.588782  100380 start.go:293] postStartSetup for "ha-783738" (driver="kvm2")
	I0217 11:57:11.588792  100380 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0217 11:57:11.588810  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.589177  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0217 11:57:11.589221  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.592192  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.592596  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.592627  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.592785  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.592979  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.593170  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.593334  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:11.675232  100380 ssh_runner.go:195] Run: cat /etc/os-release
	I0217 11:57:11.679319  100380 info.go:137] Remote host: Buildroot 2023.02.9
	I0217 11:57:11.679347  100380 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-77349/.minikube/addons for local assets ...
	I0217 11:57:11.679434  100380 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-77349/.minikube/files for local assets ...
	I0217 11:57:11.679553  100380 filesync.go:149] local asset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> 845022.pem in /etc/ssl/certs
	I0217 11:57:11.679569  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> /etc/ssl/certs/845022.pem
	I0217 11:57:11.679700  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0217 11:57:11.688596  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem --> /etc/ssl/certs/845022.pem (1708 bytes)
	I0217 11:57:11.712948  100380 start.go:296] duration metric: took 124.147315ms for postStartSetup
	I0217 11:57:11.713041  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.713388  100380 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0217 11:57:11.713431  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.716109  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.716482  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.716509  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.716697  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.716902  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.717111  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.717237  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:11.799568  100380 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0217 11:57:11.799647  100380 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0217 11:57:11.840659  100380 fix.go:56] duration metric: took 21.521561421s for fixHost
	I0217 11:57:11.840710  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.843711  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.844159  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.844211  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.844334  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.844538  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.844685  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.844877  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.845064  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:11.845292  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:11.845324  100380 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0217 11:57:11.961693  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739793431.919777749
	
	I0217 11:57:11.961720  100380 fix.go:216] guest clock: 1739793431.919777749
	I0217 11:57:11.961728  100380 fix.go:229] Guest: 2025-02-17 11:57:11.919777749 +0000 UTC Remote: 2025-02-17 11:57:11.840688548 +0000 UTC m=+21.663425668 (delta=79.089201ms)
	I0217 11:57:11.961764  100380 fix.go:200] guest clock delta is within tolerance: 79.089201ms
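
fix.go runs `date +%s.%N` in the guest, parses the seconds.nanoseconds output, and compares it against the host clock; the 79.089201ms delta here is inside the tolerance, so no clock resync is forced. A sketch of that parse-and-compare using the timestamps from the log (the 2s tolerance below is an assumption for illustration, not minikube's configured value):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestDelta turns the "seconds.nanoseconds" output of `date +%s.%N`
// into a time.Time and returns its offset from the given local clock.
func guestDelta(out string, local time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(local), nil
}

func main() {
	local := time.Unix(1739793431, 840688548) // host-side timestamp from the log
	d, err := guestDelta("1739793431.919777749", local)
	fmt.Println(d, err, d < 2*time.Second) // prints the same ~79ms delta as the log
}
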
	I0217 11:57:11.961771  100380 start.go:83] releasing machines lock for "ha-783738", held for 21.642703542s
	I0217 11:57:11.961797  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.962076  100380 main.go:141] libmachine: (ha-783738) Calling .GetIP
	I0217 11:57:11.964739  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.965072  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.965098  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.965245  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.965780  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.965938  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.966020  100380 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0217 11:57:11.966085  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.966153  100380 ssh_runner.go:195] Run: cat /version.json
	I0217 11:57:11.966182  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.968710  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.968814  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.969180  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.969211  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.969228  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.969243  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.969400  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.969505  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.969573  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.969654  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.969705  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.969780  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.969855  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:11.969896  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:12.070993  100380 ssh_runner.go:195] Run: systemctl --version
	I0217 11:57:12.076962  100380 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0217 11:57:12.082069  100380 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0217 11:57:12.082164  100380 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0217 11:57:12.097308  100380 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0217 11:57:12.097353  100380 start.go:495] detecting cgroup driver to use...
	I0217 11:57:12.097502  100380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0217 11:57:12.116857  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0217 11:57:12.128177  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0217 11:57:12.139383  100380 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0217 11:57:12.139433  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0217 11:57:12.150535  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0217 11:57:12.161824  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0217 11:57:12.173075  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0217 11:57:12.184735  100380 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0217 11:57:12.196065  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0217 11:57:12.206061  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0217 11:57:12.215826  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0217 11:57:12.225719  100380 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0217 11:57:12.234589  100380 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0217 11:57:12.234644  100380 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0217 11:57:12.244581  100380 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
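
The sysctl probe above exited 255 because /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded; the code treats that failure as a cue to modprobe the module, then enables IPv4 forwarding. Sketched in Go (needs root; names illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func enableBridgeNetfilter() error {
	// The sysctl key is absent until br_netfilter is loaded, so a failed
	// probe is expected on a fresh boot (the status 255 in the log above).
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	// Same effect as: echo 1 > /proc/sys/net/ipv4/ip_forward
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() { fmt.Println(enableBridgeNetfilter()) }
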
	I0217 11:57:12.253602  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:12.359116  100380 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0217 11:57:12.382906  100380 start.go:495] detecting cgroup driver to use...
	I0217 11:57:12.383010  100380 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0217 11:57:12.408300  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0217 11:57:12.424027  100380 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0217 11:57:12.444833  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0217 11:57:12.457628  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0217 11:57:12.470140  100380 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0217 11:57:12.497764  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0217 11:57:12.511071  100380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0217 11:57:12.529141  100380 ssh_runner.go:195] Run: which cri-dockerd
	I0217 11:57:12.532846  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0217 11:57:12.541895  100380 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0217 11:57:12.557198  100380 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0217 11:57:12.670128  100380 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0217 11:57:12.796263  100380 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0217 11:57:12.796399  100380 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0217 11:57:12.812229  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:12.923350  100380 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0217 11:57:15.351609  100380 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.428206669s)
	I0217 11:57:15.351699  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0217 11:57:15.364852  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0217 11:57:15.377423  100380 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0217 11:57:15.493635  100380 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0217 11:57:15.621524  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:15.730858  100380 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0217 11:57:15.748138  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0217 11:57:15.761818  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:15.881775  100380 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
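
With Docker chosen as the runtime, the sequence above stops containerd and crio if they are active and brings up cri-docker in their place, each stop bracketed by `is-active --quiet` checks. The check-stop-recheck step as a sketch (stopIfActive is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// stopIfActive mirrors the logged check / stop -f / re-check pattern used
// for containerd and crio before cri-docker takes over.
func stopIfActive(svc string) error {
	if exec.Command("sudo", "systemctl", "is-active", "--quiet", svc).Run() != nil {
		return nil // already inactive, nothing to stop
	}
	if err := exec.Command("sudo", "systemctl", "stop", "-f", svc).Run(); err != nil {
		return err
	}
	if exec.Command("sudo", "systemctl", "is-active", "--quiet", svc).Run() == nil {
		return fmt.Errorf("%s is still active after stop", svc)
	}
	return nil
}

func main() {
	for _, svc := range []string{"containerd", "crio"} {
		fmt.Println(svc, stopIfActive(svc))
	}
}
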
	I0217 11:57:15.960772  100380 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0217 11:57:15.960858  100380 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0217 11:57:15.966411  100380 start.go:563] Will wait 60s for crictl version
	I0217 11:57:15.966517  100380 ssh_runner.go:195] Run: which crictl
	I0217 11:57:15.974036  100380 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0217 11:57:16.011837  100380 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0217 11:57:16.011912  100380 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0217 11:57:16.036945  100380 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0217 11:57:16.060974  100380 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0217 11:57:16.061031  100380 main.go:141] libmachine: (ha-783738) Calling .GetIP
	I0217 11:57:16.063810  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:16.064255  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:16.064298  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:16.064499  100380 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0217 11:57:16.068464  100380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0217 11:57:16.080668  100380 kubeadm.go:883] updating cluster {Name:ha-783738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:
default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.168 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-
gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0217 11:57:16.080804  100380 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0217 11:57:16.080849  100380 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0217 11:57:16.098890  100380 docker.go:689] Got preloaded images: -- stdout --
	kindest/kindnetd:v20250214-acbabc1a
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	ghcr.io/kube-vip/kube-vip:v0.8.9
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0217 11:57:16.098911  100380 docker.go:619] Images already preloaded, skipping extraction
	I0217 11:57:16.098974  100380 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0217 11:57:16.116506  100380 docker.go:689] Got preloaded images: -- stdout --
	kindest/kindnetd:v20250214-acbabc1a
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	ghcr.io/kube-vip/kube-vip:v0.8.9
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0217 11:57:16.116540  100380 cache_images.go:84] Images are preloaded, skipping loading
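
The image list is fetched twice: once to decide whether the preloaded image tarball needs extracting and once to confirm, and since every expected image is already present the extraction is skipped. The membership check might look like this (imagesPreloaded and the trimmed-down required list are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagesPreloaded reports whether every required image already shows up
// in `docker images --format {{.Repository}}:{{.Tag}}`.
func imagesPreloaded(required []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range required {
		if !have[img] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := imagesPreloaded([]string{
		"registry.k8s.io/kube-apiserver:v1.32.1",
		"registry.k8s.io/etcd:3.5.16-0",
		"registry.k8s.io/pause:3.10",
	})
	fmt.Println(ok, err)
}
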
	I0217 11:57:16.116556  100380 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.32.1 docker true true} ...
	I0217 11:57:16.116703  100380 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-783738 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
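
The kubelet drop-in above uses the same blank-ExecStart reset trick as the docker unit, then pins --hostname-override and --node-ip to this node's values from the cluster config. Rendering it from a template, as a sketch (the unit text is lifted from the log; the Go wrapper is illustrative):

package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Name}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	t.Execute(os.Stdout, map[string]string{
		"Version": "v1.32.1", "Name": "ha-783738", "IP": "192.168.39.249",
	})
}
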
	I0217 11:57:16.116764  100380 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0217 11:57:16.164431  100380 cni.go:84] Creating CNI manager for ""
	I0217 11:57:16.164455  100380 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0217 11:57:16.164469  100380 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0217 11:57:16.164499  100380 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-783738 NodeName:ha-783738 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0217 11:57:16.164682  100380 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-783738"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.249"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
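The rendered kubeadm config is four YAML documents separated by "---" (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), each dispatched on its kind. A small Go sketch (example data inlined, not minikube code) of splitting such a file and reporting each document's kind:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Abbreviated stand-in for the multi-document config rendered above.
	config := `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration`

	for i, doc := range strings.Split(config, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}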
	
	I0217 11:57:16.164704  100380 kube-vip.go:115] generating kube-vip config ...
	I0217 11:57:16.164766  100380 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0217 11:57:16.178981  100380 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0217 11:57:16.179102  100380 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
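This manifest runs kube-vip as a static pod with leader election (vip_leaderelection) and control-plane load-balancing (cp_enable, lb_enable), advertising VIP 192.168.39.254 on port 8443. A trivial Go reachability probe for that VIP (illustrative only; the address and port come from the config above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// TCP dial to the control-plane VIP that kube-vip advertises.
	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP reachable")
}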
	I0217 11:57:16.179161  100380 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0217 11:57:16.189237  100380 binaries.go:44] Found k8s binaries, skipping transfer
	I0217 11:57:16.189321  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0217 11:57:16.198727  100380 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0217 11:57:16.214787  100380 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0217 11:57:16.231014  100380 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0217 11:57:16.246729  100380 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0217 11:57:16.261779  100380 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0217 11:57:16.265453  100380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0217 11:57:16.276521  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:16.384249  100380 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0217 11:57:16.401291  100380 certs.go:68] Setting up /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738 for IP: 192.168.39.249
	I0217 11:57:16.401328  100380 certs.go:194] generating shared ca certs ...
	I0217 11:57:16.401350  100380 certs.go:226] acquiring lock for ca certs: {Name:mk7093571229e43ae88bf2507ccc9fd2cd05388e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:16.401508  100380 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.key
	I0217 11:57:16.401544  100380 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.key
	I0217 11:57:16.401555  100380 certs.go:256] generating profile certs ...
	I0217 11:57:16.401635  100380 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/client.key
	I0217 11:57:16.401660  100380 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key.1b1cbf3b
	I0217 11:57:16.401671  100380 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt.1b1cbf3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.31 192.168.39.254]
	I0217 11:57:16.475033  100380 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt.1b1cbf3b ...
	I0217 11:57:16.475062  100380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt.1b1cbf3b: {Name:mkcae1f9f128e66451afcd5b133e6826e9862cbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:16.475228  100380 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key.1b1cbf3b ...
	I0217 11:57:16.475243  100380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key.1b1cbf3b: {Name:mk484c481609a3c2ed473dfecb8f5468118b1367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:16.475330  100380 certs.go:381] copying /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt.1b1cbf3b -> /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt
	I0217 11:57:16.475492  100380 certs.go:385] copying /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key.1b1cbf3b -> /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key
	I0217 11:57:16.475629  100380 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.key
	I0217 11:57:16.475644  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0217 11:57:16.475656  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0217 11:57:16.475671  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0217 11:57:16.475699  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0217 11:57:16.475714  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0217 11:57:16.475726  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0217 11:57:16.475737  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0217 11:57:16.475748  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0217 11:57:16.475800  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/84502.pem (1338 bytes)
	W0217 11:57:16.475831  100380 certs.go:480] ignoring /home/jenkins/minikube-integration/20427-77349/.minikube/certs/84502_empty.pem, impossibly tiny 0 bytes
	I0217 11:57:16.475839  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem (1679 bytes)
	I0217 11:57:16.475861  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem (1082 bytes)
	I0217 11:57:16.475900  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem (1123 bytes)
	I0217 11:57:16.475927  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem (1675 bytes)
	I0217 11:57:16.476002  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem (1708 bytes)
	I0217 11:57:16.476031  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/84502.pem -> /usr/share/ca-certificates/84502.pem
	I0217 11:57:16.476046  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> /usr/share/ca-certificates/845022.pem
	I0217 11:57:16.476058  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0217 11:57:16.476652  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0217 11:57:16.507138  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0217 11:57:16.534527  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0217 11:57:16.562922  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0217 11:57:16.587311  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0217 11:57:16.624087  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0217 11:57:16.662037  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0217 11:57:16.713619  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0217 11:57:16.756345  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/certs/84502.pem --> /usr/share/ca-certificates/84502.pem (1338 bytes)
	I0217 11:57:16.803520  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem --> /usr/share/ca-certificates/845022.pem (1708 bytes)
	I0217 11:57:16.846879  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0217 11:57:16.920267  100380 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0217 11:57:16.950648  100380 ssh_runner.go:195] Run: openssl version
	I0217 11:57:16.958784  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84502.pem && ln -fs /usr/share/ca-certificates/84502.pem /etc/ssl/certs/84502.pem"
	I0217 11:57:16.987238  100380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84502.pem
	I0217 11:57:16.994220  100380 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 17 11:42 /usr/share/ca-certificates/84502.pem
	I0217 11:57:16.994283  100380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84502.pem
	I0217 11:57:17.016466  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84502.pem /etc/ssl/certs/51391683.0"
	I0217 11:57:17.039972  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845022.pem && ln -fs /usr/share/ca-certificates/845022.pem /etc/ssl/certs/845022.pem"
	I0217 11:57:17.061818  100380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845022.pem
	I0217 11:57:17.068988  100380 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 17 11:42 /usr/share/ca-certificates/845022.pem
	I0217 11:57:17.069057  100380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845022.pem
	I0217 11:57:17.075953  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/845022.pem /etc/ssl/certs/3ec20f2e.0"
	I0217 11:57:17.094161  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0217 11:57:17.111313  100380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0217 11:57:17.116268  100380 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 17 11:35 /usr/share/ca-certificates/minikubeCA.pem
	I0217 11:57:17.116335  100380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0217 11:57:17.122743  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0217 11:57:17.141827  100380 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0217 11:57:17.146771  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0217 11:57:17.158301  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0217 11:57:17.170200  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0217 11:57:17.177413  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0217 11:57:17.186556  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0217 11:57:17.193933  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
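Each "openssl x509 -checkend 86400" above exits nonzero if the certificate expires within 86400 seconds (24 hours), which is how the restart decides whether certs need regeneration. The same assertion as a Go sketch (the path is one of the certs checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	// -checkend 86400: fail if the cert is within 86400s (24h) of expiry.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid for at least another 24h")
	}
}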
	I0217 11:57:17.203839  100380 kubeadm.go:392] StartCluster: {Name:ha-783738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.168 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 11:57:17.204089  100380 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0217 11:57:17.225257  100380 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0217 11:57:17.236858  100380 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0217 11:57:17.236876  100380 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0217 11:57:17.236920  100380 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0217 11:57:17.246285  100380 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0217 11:57:17.246828  100380 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-783738" does not appear in /home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:57:17.246986  100380 kubeconfig.go:62] /home/jenkins/minikube-integration/20427-77349/kubeconfig needs updating (will repair): [kubeconfig missing "ha-783738" cluster setting kubeconfig missing "ha-783738" context setting]
	I0217 11:57:17.247367  100380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-77349/kubeconfig: {Name:mka23a5c17f10bb58374e83755a2ac6a44464e11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:17.247895  100380 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:57:17.248117  100380 kapi.go:59] client config for ha-783738: &rest.Config{Host:"https://192.168.39.249:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/client.crt", KeyFile:"/home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/client.key", CAFile:"/home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24df700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0217 11:57:17.248591  100380 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0217 11:57:17.248610  100380 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0217 11:57:17.248615  100380 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0217 11:57:17.248619  100380 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0217 11:57:17.248634  100380 cert_rotation.go:140] Starting client certificate rotation controller
	I0217 11:57:17.249054  100380 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0217 11:57:17.258029  100380 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.249
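The restart path decides whether reconfiguration is needed by diffing the previously applied kubeadm.yaml against the freshly rendered kubeadm.yaml.new: "diff -u" exits 0 when nothing changed, 1 when the files differ. A Go sketch of that check (the function name is illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func needsReconfig(oldPath, newPath string) (bool, error) {
	err := exec.Command("diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil // exit 0: files identical, skip kubeadm
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, nil // exit 1: config changed
	}
	return false, err // diff itself failed (e.g., missing file)
}

func main() {
	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(changed, err)
}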
	I0217 11:57:17.258053  100380 kubeadm.go:597] duration metric: took 21.170416ms to restartPrimaryControlPlane
	I0217 11:57:17.258062  100380 kubeadm.go:394] duration metric: took 54.240079ms to StartCluster
	I0217 11:57:17.258077  100380 settings.go:142] acquiring lock: {Name:mkf730c657b1c2d5a481dbeb02dabe7dfa17f2d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:17.258150  100380 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:57:17.258639  100380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-77349/kubeconfig: {Name:mka23a5c17f10bb58374e83755a2ac6a44464e11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:17.258848  100380 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0217 11:57:17.258870  100380 start.go:241] waiting for startup goroutines ...
	I0217 11:57:17.258884  100380 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0217 11:57:17.259112  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:57:17.261397  100380 out.go:177] * Enabled addons: 
	I0217 11:57:17.262668  100380 addons.go:514] duration metric: took 3.785415ms for enable addons: enabled=[]
	I0217 11:57:17.262703  100380 start.go:246] waiting for cluster config update ...
	I0217 11:57:17.262713  100380 start.go:255] writing updated cluster config ...
	I0217 11:57:17.264127  100380 out.go:201] 
	I0217 11:57:17.265577  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:57:17.265703  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:57:17.267570  100380 out.go:177] * Starting "ha-783738-m02" control-plane node in "ha-783738" cluster
	I0217 11:57:17.268921  100380 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0217 11:57:17.268950  100380 cache.go:56] Caching tarball of preloaded images
	I0217 11:57:17.269061  100380 preload.go:172] Found /home/jenkins/minikube-integration/20427-77349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0217 11:57:17.269074  100380 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0217 11:57:17.269250  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:57:17.269484  100380 start.go:360] acquireMachinesLock for ha-783738-m02: {Name:mk05ba8323ae77ab7dcc14c378d65810d956fdc0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0217 11:57:17.269554  100380 start.go:364] duration metric: took 46.103µs to acquireMachinesLock for "ha-783738-m02"
	I0217 11:57:17.269576  100380 start.go:96] Skipping create...Using existing machine configuration
	I0217 11:57:17.269584  100380 fix.go:54] fixHost starting: m02
	I0217 11:57:17.269846  100380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:57:17.269891  100380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:57:17.284961  100380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45093
	I0217 11:57:17.285438  100380 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:57:17.285964  100380 main.go:141] libmachine: Using API Version  1
	I0217 11:57:17.285991  100380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:57:17.286358  100380 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:57:17.286562  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:17.286744  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetState
	I0217 11:57:17.288288  100380 fix.go:112] recreateIfNeeded on ha-783738-m02: state=Stopped err=<nil>
	I0217 11:57:17.288317  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	W0217 11:57:17.288473  100380 fix.go:138] unexpected machine state, will restart: <nil>
	I0217 11:57:17.290496  100380 out.go:177] * Restarting existing kvm2 VM for "ha-783738-m02" ...
	I0217 11:57:17.291737  100380 main.go:141] libmachine: (ha-783738-m02) Calling .Start
	I0217 11:57:17.291936  100380 main.go:141] libmachine: (ha-783738-m02) starting domain...
	I0217 11:57:17.291957  100380 main.go:141] libmachine: (ha-783738-m02) ensuring networks are active...
	I0217 11:57:17.292625  100380 main.go:141] libmachine: (ha-783738-m02) Ensuring network default is active
	I0217 11:57:17.292935  100380 main.go:141] libmachine: (ha-783738-m02) Ensuring network mk-ha-783738 is active
	I0217 11:57:17.293260  100380 main.go:141] libmachine: (ha-783738-m02) getting domain XML...
	I0217 11:57:17.293893  100380 main.go:141] libmachine: (ha-783738-m02) creating domain...
	I0217 11:57:18.506378  100380 main.go:141] libmachine: (ha-783738-m02) waiting for IP...
	I0217 11:57:18.507364  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:18.507881  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:18.507974  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:18.507878  100573 retry.go:31] will retry after 190.071186ms: waiting for domain to come up
	I0217 11:57:18.699203  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:18.699617  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:18.699682  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:18.699590  100573 retry.go:31] will retry after 254.022024ms: waiting for domain to come up
	I0217 11:57:18.955132  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:18.955578  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:18.955602  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:18.955533  100573 retry.go:31] will retry after 332.594264ms: waiting for domain to come up
	I0217 11:57:19.290041  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:19.290494  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:19.290519  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:19.290472  100573 retry.go:31] will retry after 550.484931ms: waiting for domain to come up
	I0217 11:57:19.842363  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:19.842844  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:19.842873  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:19.842822  100573 retry.go:31] will retry after 743.60757ms: waiting for domain to come up
	I0217 11:57:20.587667  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:20.588025  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:20.588058  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:20.587981  100573 retry.go:31] will retry after 701.750144ms: waiting for domain to come up
	I0217 11:57:21.290980  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:21.291500  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:21.291530  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:21.291445  100573 retry.go:31] will retry after 755.313925ms: waiting for domain to come up
	I0217 11:57:22.047876  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:22.048286  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:22.048318  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:22.048246  100573 retry.go:31] will retry after 1.338224716s: waiting for domain to come up
	I0217 11:57:23.388238  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:23.388759  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:23.388796  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:23.388727  100573 retry.go:31] will retry after 1.367661407s: waiting for domain to come up
	I0217 11:57:24.758376  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:24.758722  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:24.758764  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:24.758718  100573 retry.go:31] will retry after 2.08548116s: waiting for domain to come up
	I0217 11:57:26.846621  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:26.847150  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:26.847253  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:26.847166  100573 retry.go:31] will retry after 1.933968455s: waiting for domain to come up
	I0217 11:57:28.782369  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:28.782785  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:28.782815  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:28.782752  100573 retry.go:31] will retry after 3.162167749s: waiting for domain to come up
	I0217 11:57:31.947188  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:31.947578  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:31.947603  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:31.947545  100573 retry.go:31] will retry after 3.924986004s: waiting for domain to come up
	I0217 11:57:35.877102  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:35.877437  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has current primary IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:35.877460  100380 main.go:141] libmachine: (ha-783738-m02) found domain IP: 192.168.39.31
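The "will retry after ..." lines while waiting for the domain's IP show a growing, jittered delay between polls. A compact Go sketch of that pattern (assumed shape, not minikube's retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryBackoff polls fn with a capped, jittered, roughly doubling delay.
func retryBackoff(attempts int, base, max time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		// Jitter: somewhere between 0.5x and 1.5x the current delay.
		jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		if jittered > max {
			jittered = max
		}
		fmt.Printf("will retry after %v\n", jittered)
		time.Sleep(jittered)
		delay *= 2
	}
	return errors.New("gave up waiting")
}

func main() {
	i := 0
	_ = retryBackoff(5, 200*time.Millisecond, 4*time.Second, func() error {
		i++
		if i < 4 {
			return errors.New("no IP yet")
		}
		return nil
	})
}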
	I0217 11:57:35.877473  100380 main.go:141] libmachine: (ha-783738-m02) reserving static IP address...
	I0217 11:57:35.877915  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "ha-783738-m02", mac: "52:54:00:06:81:a2", ip: "192.168.39.31"} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:35.877942  100380 main.go:141] libmachine: (ha-783738-m02) DBG | skip adding static IP to network mk-ha-783738 - found existing host DHCP lease matching {name: "ha-783738-m02", mac: "52:54:00:06:81:a2", ip: "192.168.39.31"}
	I0217 11:57:35.877960  100380 main.go:141] libmachine: (ha-783738-m02) reserved static IP address 192.168.39.31 for domain ha-783738-m02
	I0217 11:57:35.877972  100380 main.go:141] libmachine: (ha-783738-m02) waiting for SSH...
	I0217 11:57:35.877983  100380 main.go:141] libmachine: (ha-783738-m02) DBG | Getting to WaitForSSH function...
	I0217 11:57:35.880382  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:35.880801  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:35.880830  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:35.880903  100380 main.go:141] libmachine: (ha-783738-m02) DBG | Using SSH client type: external
	I0217 11:57:35.880925  100380 main.go:141] libmachine: (ha-783738-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa (-rw-------)
	I0217 11:57:35.880955  100380 main.go:141] libmachine: (ha-783738-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0217 11:57:35.880970  100380 main.go:141] libmachine: (ha-783738-m02) DBG | About to run SSH command:
	I0217 11:57:35.880982  100380 main.go:141] libmachine: (ha-783738-m02) DBG | exit 0
	I0217 11:57:36.005182  100380 main.go:141] libmachine: (ha-783738-m02) DBG | SSH cmd err, output: <nil>: 
	I0217 11:57:36.005527  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetConfigRaw
	I0217 11:57:36.006216  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetIP
	I0217 11:57:36.008704  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.009084  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.009118  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.009443  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:57:36.009639  100380 machine.go:93] provisionDockerMachine start ...
	I0217 11:57:36.009657  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:36.009816  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.011849  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.012187  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.012218  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.012360  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.012557  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.012710  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.012836  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.012947  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.013115  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.013130  100380 main.go:141] libmachine: About to run SSH command:
	hostname
	I0217 11:57:36.113056  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0217 11:57:36.113093  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetMachineName
	I0217 11:57:36.113376  100380 buildroot.go:166] provisioning hostname "ha-783738-m02"
	I0217 11:57:36.113403  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetMachineName
	I0217 11:57:36.113566  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.116233  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.116606  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.116634  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.116762  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.116907  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.117025  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.117242  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.117464  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.117681  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.117699  100380 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-783738-m02 && echo "ha-783738-m02" | sudo tee /etc/hostname
	I0217 11:57:36.230628  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-783738-m02
	
	I0217 11:57:36.230670  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.233644  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.233991  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.234015  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.234196  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.234491  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.234686  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.234856  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.235006  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.235194  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.235211  100380 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-783738-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-783738-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-783738-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0217 11:57:36.341290  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0217 11:57:36.341332  100380 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20427-77349/.minikube CaCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20427-77349/.minikube}
	I0217 11:57:36.341348  100380 buildroot.go:174] setting up certificates
	I0217 11:57:36.341360  100380 provision.go:84] configureAuth start
	I0217 11:57:36.341373  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetMachineName
	I0217 11:57:36.341646  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetIP
	I0217 11:57:36.344453  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.344944  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.344981  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.345158  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.347416  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.347719  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.347744  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.347910  100380 provision.go:143] copyHostCerts
	I0217 11:57:36.347943  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem
	I0217 11:57:36.347989  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem, removing ...
	I0217 11:57:36.347999  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem
	I0217 11:57:36.348065  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem (1082 bytes)
	I0217 11:57:36.348156  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem
	I0217 11:57:36.348190  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem, removing ...
	I0217 11:57:36.348200  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem
	I0217 11:57:36.348229  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem (1123 bytes)
	I0217 11:57:36.348286  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem
	I0217 11:57:36.348310  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem, removing ...
	I0217 11:57:36.348320  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem
	I0217 11:57:36.348347  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem (1675 bytes)
	I0217 11:57:36.348413  100380 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem org=jenkins.ha-783738-m02 san=[127.0.0.1 192.168.39.31 ha-783738-m02 localhost minikube]
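provision.go generates a server certificate with the SANs listed above (127.0.0.1, 192.168.39.31, ha-783738-m02, localhost, minikube). A Go sketch producing a certificate with the same SANs (self-signed for brevity; minikube actually signs with its ca.pem/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-783738-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provision.go line above.
		DNSNames:    []string{"ha-783738-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.31")},
	}
	// Self-signed here; a real CA-signed cert passes the CA cert as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}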
	I0217 11:57:36.476199  100380 provision.go:177] copyRemoteCerts
	I0217 11:57:36.476256  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0217 11:57:36.476280  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.479126  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.479497  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.479529  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.479677  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.479868  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.480073  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.480258  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	I0217 11:57:36.558954  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0217 11:57:36.559023  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0217 11:57:36.581755  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0217 11:57:36.581816  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0217 11:57:36.604328  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0217 11:57:36.604411  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0217 11:57:36.626183  100380 provision.go:87] duration metric: took 284.807453ms to configureAuth
	I0217 11:57:36.626219  100380 buildroot.go:189] setting minikube options for container-runtime
	I0217 11:57:36.626492  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:57:36.626522  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:36.626768  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.629194  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.629569  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.629594  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.629740  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.629904  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.630077  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.630201  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.630389  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.630601  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.630614  100380 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0217 11:57:36.730964  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0217 11:57:36.730995  100380 buildroot.go:70] root file system type: tmpfs
	I0217 11:57:36.731148  100380 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0217 11:57:36.731184  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.733718  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.734119  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.734150  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.734340  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.734539  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.734714  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.734847  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.734986  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.735198  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.735304  100380 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.249"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0217 11:57:36.846599  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.249
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
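The unit text above is evidently produced by substituting the run's dockerd flags and proxy environment into a service template before streaming it through sudo tee. A rough sketch of that kind of rendering step using text/template; the template body and field names here are assumptions for illustration, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// Hypothetical, trimmed-down template; the real unit carries many more directives.
// The empty ExecStart= line intentionally clears any inherited ExecStart, as the
// unit's own comments explain.
var unitTmpl = template.Must(template.New("docker.service").Parse(`[Service]
{{- range .Env}}
Environment={{.}}
{{- end}}
ExecStart=
ExecStart=/usr/bin/dockerd {{.ExtraFlags}}
`))

func main() {
	err := unitTmpl.Execute(os.Stdout, struct {
		Env        []string
		ExtraFlags string
	}{
		Env:        []string{"NO_PROXY=192.168.39.249"},
		ExtraFlags: "-H unix:///var/run/docker.sock --tlsverify",
	})
	if err != nil {
		panic(err)
	}
}
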
	I0217 11:57:36.846633  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.849370  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.849714  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.849733  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.849923  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.850116  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.850290  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.850443  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.850608  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.850788  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.850805  100380 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0217 11:57:38.700010  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0217 11:57:38.700036  100380 machine.go:96] duration metric: took 2.690384734s to provisionDockerMachine
	I0217 11:57:38.700051  100380 start.go:293] postStartSetup for "ha-783738-m02" (driver="kvm2")
	I0217 11:57:38.700060  100380 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0217 11:57:38.700075  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:38.700389  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0217 11:57:38.700425  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:38.703068  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.703435  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:38.703465  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.703605  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:38.703807  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:38.703952  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:38.704102  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	I0217 11:57:38.783381  100380 ssh_runner.go:195] Run: cat /etc/os-release
	I0217 11:57:38.787188  100380 info.go:137] Remote host: Buildroot 2023.02.9
	I0217 11:57:38.787215  100380 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-77349/.minikube/addons for local assets ...
	I0217 11:57:38.787270  100380 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-77349/.minikube/files for local assets ...
	I0217 11:57:38.787341  100380 filesync.go:149] local asset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> 845022.pem in /etc/ssl/certs
	I0217 11:57:38.787352  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> /etc/ssl/certs/845022.pem
	I0217 11:57:38.787430  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0217 11:57:38.796091  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem --> /etc/ssl/certs/845022.pem (1708 bytes)
	I0217 11:57:38.817716  100380 start.go:296] duration metric: took 117.649565ms for postStartSetup
	I0217 11:57:38.817759  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:38.818052  100380 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0217 11:57:38.818087  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:38.820354  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.820669  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:38.820694  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.820809  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:38.820978  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:38.821138  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:38.821273  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	I0217 11:57:38.900214  100380 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0217 11:57:38.900294  100380 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0217 11:57:38.959273  100380 fix.go:56] duration metric: took 21.689681729s for fixHost
	I0217 11:57:38.959327  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:38.961853  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.962326  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:38.962364  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.962591  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:38.962788  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:38.962952  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:38.963062  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:38.963238  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:38.963408  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:38.963419  100380 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0217 11:57:39.071315  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739793459.049434891
	
	I0217 11:57:39.071339  100380 fix.go:216] guest clock: 1739793459.049434891
	I0217 11:57:39.071349  100380 fix.go:229] Guest: 2025-02-17 11:57:39.049434891 +0000 UTC Remote: 2025-02-17 11:57:38.959302801 +0000 UTC m=+48.782039917 (delta=90.13209ms)
	I0217 11:57:39.071366  100380 fix.go:200] guest clock delta is within tolerance: 90.13209ms
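
The clock check above compares the guest's date +%s.%N output against the host clock and accepts the skew when it is within tolerance (here a 90.13209ms delta passes). A sketch of that comparison; the 1s tolerance is an assumed example value, not necessarily minikube's configured threshold:

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Sample value taken from the log's `date +%s.%N` output; in the real flow
	// this is captured over SSH moments before the comparison. Parsing as
	// float64 loses sub-microsecond precision, which is fine for skew checks.
	guestOut := "1739793459.049434891"
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v, within 1s tolerance: %v\n", delta, delta <= time.Second)
}
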
	I0217 11:57:39.071371  100380 start.go:83] releasing machines lock for "ha-783738-m02", held for 21.801804436s
	I0217 11:57:39.071393  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:39.071600  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetIP
	I0217 11:57:39.074321  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.074707  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:39.074736  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.076949  100380 out.go:177] * Found network options:
	I0217 11:57:39.078428  100380 out.go:177]   - NO_PROXY=192.168.39.249
	W0217 11:57:39.079686  100380 proxy.go:119] fail to check proxy env: Error ip not in block
	I0217 11:57:39.079714  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:39.080218  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:39.080403  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:39.080510  100380 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0217 11:57:39.080551  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	W0217 11:57:39.080631  100380 proxy.go:119] fail to check proxy env: Error ip not in block
	I0217 11:57:39.080722  100380 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0217 11:57:39.080748  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:39.083432  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.083453  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.083887  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:39.083914  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:39.083933  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.083949  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.084264  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:39.084411  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:39.084597  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:39.084609  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:39.084763  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:39.084784  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	I0217 11:57:39.084915  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:39.085034  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	W0217 11:57:39.178061  100380 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0217 11:57:39.178137  100380 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0217 11:57:39.195964  100380 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0217 11:57:39.196001  100380 start.go:495] detecting cgroup driver to use...
	I0217 11:57:39.196148  100380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0217 11:57:39.216666  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0217 11:57:39.226815  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0217 11:57:39.236611  100380 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0217 11:57:39.236669  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0217 11:57:39.246500  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0217 11:57:39.256691  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0217 11:57:39.266509  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0217 11:57:39.276231  100380 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0217 11:57:39.286298  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0217 11:57:39.296149  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0217 11:57:39.305984  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0217 11:57:39.315650  100380 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0217 11:57:39.324721  100380 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0217 11:57:39.324777  100380 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0217 11:57:39.334429  100380 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
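
The status-255 sysctl above is an expected negative probe: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once br_netfilter is loaded, so the flow falls back to modprobe and then enables IPv4 forwarding. A stdlib-only sketch of that check-then-load fallback (illustrative; actually loading the module requires root):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err == nil {
		fmt.Println("bridge netfilter already available")
		return
	}
	// Mirror of the fallback in the log: loading br_netfilter creates the sysctl.
	if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
		fmt.Printf("modprobe br_netfilter failed: %v: %s", err, out)
		return
	}
	fmt.Println("br_netfilter loaded")
}
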
	I0217 11:57:39.343052  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:39.458041  100380 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0217 11:57:39.483361  100380 start.go:495] detecting cgroup driver to use...
	I0217 11:57:39.483453  100380 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0217 11:57:39.501404  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0217 11:57:39.522545  100380 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0217 11:57:39.545214  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0217 11:57:39.557462  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0217 11:57:39.569445  100380 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0217 11:57:39.593668  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0217 11:57:39.606767  100380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0217 11:57:39.623713  100380 ssh_runner.go:195] Run: which cri-dockerd
	I0217 11:57:39.627306  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0217 11:57:39.635920  100380 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0217 11:57:39.651184  100380 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0217 11:57:39.767938  100380 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0217 11:57:39.884761  100380 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0217 11:57:39.884806  100380 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0217 11:57:39.900934  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:40.013206  100380 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0217 11:58:41.088581  100380 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.075335279s)
	I0217 11:58:41.088680  100380 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0217 11:58:41.109373  100380 out.go:201] 
	W0217 11:58:41.110918  100380 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Feb 17 11:57:37 ha-783738-m02 systemd[1]: Starting Docker Application Container Engine...
	Feb 17 11:57:37 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:37.207555071Z" level=info msg="Starting up"
	Feb 17 11:57:37 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:37.208523706Z" level=info msg="containerd not running, starting managed containerd"
	Feb 17 11:57:37 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:37.209284365Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=499
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.234357473Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.253922324Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254071326Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254155313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254195097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254502645Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254572700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254826671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254880442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254926515Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254965881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.255209553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.255502921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257578132Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257723954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257912930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257960933Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.258214223Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.258292090Z" level=info msg="metadata content store policy set" policy=shared
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262281766Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262389757Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262437193Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262478052Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262523730Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262614966Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262915194Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263049035Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263094390Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263137669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263176270Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263217488Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263254710Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263292496Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263339613Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263377065Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263418085Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263453223Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263511094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263549833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263589341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263631649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263726157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263766086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263809930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263847665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263885358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263932212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263972615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264020660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264063975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264103157Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264158305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264194401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264230305Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264327104Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264417123Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264457690Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264499822Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264534568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264575047Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264616722Z" level=info msg="NRI interface is disabled by configuration."
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264938960Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.265032087Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.265091203Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.265132167Z" level=info msg="containerd successfully booted in 0.032037s"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.237803305Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.295143778Z" level=info msg="Loading containers: start."
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.484051173Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.565431513Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.632528889Z" level=info msg="Loading containers: done."
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.653906274Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.653941707Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.653962858Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.654196375Z" level=info msg="Daemon has completed initialization"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.676178691Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.676315120Z" level=info msg="API listen on [::]:2376"
	Feb 17 11:57:38 ha-783738-m02 systemd[1]: Started Docker Application Container Engine.
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.005718953Z" level=info msg="Processing signal 'terminated'"
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.007186879Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.007378782Z" level=info msg="Daemon shutdown complete"
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.007446197Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 17 11:57:40 ha-783738-m02 systemd[1]: Stopping Docker Application Container Engine...
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.008214930Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 17 11:57:41 ha-783738-m02 systemd[1]: docker.service: Deactivated successfully.
	Feb 17 11:57:41 ha-783738-m02 systemd[1]: Stopped Docker Application Container Engine.
	Feb 17 11:57:41 ha-783738-m02 systemd[1]: Starting Docker Application Container Engine...
	Feb 17 11:57:41 ha-783738-m02 dockerd[1120]: time="2025-02-17T11:57:41.051838490Z" level=info msg="Starting up"
	Feb 17 11:58:41 ha-783738-m02 dockerd[1120]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 17 11:58:41 ha-783738-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 17 11:58:41 ha-783738-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 17 11:58:41 ha-783738-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
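
The journal above isolates the actual failure: the restarted dockerd (pid 1120) spent its full startup window trying to dial /run/containerd/containerd.sock and gave up with "context deadline exceeded", which lines up with systemctl restart docker blocking for just over a minute (1m1.075335279s above). A minimal probe that reproduces the same reachability check with a short bound (sketch only; the socket path is taken from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 5*time.Second)
	if err != nil {
		fmt.Println("containerd socket not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("containerd socket is accepting connections")
}
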
	W0217 11:58:41.110964  100380 out.go:270] * 
	W0217 11:58:41.111815  100380 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0217 11:58:41.113412  100380 out.go:201] 
	
	
	==> Docker <==
	Feb 17 11:57:23 ha-783738 dockerd[1134]: time="2025-02-17T11:57:23.574956613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 17 11:57:44 ha-783738 dockerd[1126]: time="2025-02-17T11:57:44.652472286Z" level=info msg="ignoring event" container=0eab009d1fe54d541fe5b166302e5af1a153e8aa37ad6a133704c1f40918f7c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 17 11:57:44 ha-783738 dockerd[1134]: time="2025-02-17T11:57:44.653058320Z" level=info msg="shim disconnected" id=0eab009d1fe54d541fe5b166302e5af1a153e8aa37ad6a133704c1f40918f7c9 namespace=moby
	Feb 17 11:57:44 ha-783738 dockerd[1134]: time="2025-02-17T11:57:44.653483834Z" level=warning msg="cleaning up after shim disconnected" id=0eab009d1fe54d541fe5b166302e5af1a153e8aa37ad6a133704c1f40918f7c9 namespace=moby
	Feb 17 11:57:44 ha-783738 dockerd[1134]: time="2025-02-17T11:57:44.653545740Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 17 11:57:45 ha-783738 dockerd[1126]: time="2025-02-17T11:57:45.663576348Z" level=info msg="ignoring event" container=1683ded4f12ef91eea7067f33248f5185b17f0532a1c1480efe277bcd8accfe6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 17 11:57:45 ha-783738 dockerd[1134]: time="2025-02-17T11:57:45.664110377Z" level=info msg="shim disconnected" id=1683ded4f12ef91eea7067f33248f5185b17f0532a1c1480efe277bcd8accfe6 namespace=moby
	Feb 17 11:57:45 ha-783738 dockerd[1134]: time="2025-02-17T11:57:45.664165013Z" level=warning msg="cleaning up after shim disconnected" id=1683ded4f12ef91eea7067f33248f5185b17f0532a1c1480efe277bcd8accfe6 namespace=moby
	Feb 17 11:57:45 ha-783738 dockerd[1134]: time="2025-02-17T11:57:45.664175956Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.854960498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.855123802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.855151191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.855373177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.858152322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.858222102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.858232103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.858372930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 17 11:58:25 ha-783738 dockerd[1126]: time="2025-02-17T11:58:25.325613613Z" level=info msg="ignoring event" container=0d8dd6abc6b0262f0e2de062685df6bbc87187dd14023d0fd12b894f48bd2001 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 17 11:58:25 ha-783738 dockerd[1134]: time="2025-02-17T11:58:25.326644755Z" level=info msg="shim disconnected" id=0d8dd6abc6b0262f0e2de062685df6bbc87187dd14023d0fd12b894f48bd2001 namespace=moby
	Feb 17 11:58:25 ha-783738 dockerd[1134]: time="2025-02-17T11:58:25.326737271Z" level=warning msg="cleaning up after shim disconnected" id=0d8dd6abc6b0262f0e2de062685df6bbc87187dd14023d0fd12b894f48bd2001 namespace=moby
	Feb 17 11:58:25 ha-783738 dockerd[1134]: time="2025-02-17T11:58:25.326756884Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 17 11:58:26 ha-783738 dockerd[1126]: time="2025-02-17T11:58:26.334899301Z" level=info msg="ignoring event" container=2e90f752fdc0601abb5401e228fa8355b97462cfd9f4dafb766f56eaf8e7b13a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 17 11:58:26 ha-783738 dockerd[1134]: time="2025-02-17T11:58:26.335703125Z" level=info msg="shim disconnected" id=2e90f752fdc0601abb5401e228fa8355b97462cfd9f4dafb766f56eaf8e7b13a namespace=moby
	Feb 17 11:58:26 ha-783738 dockerd[1134]: time="2025-02-17T11:58:26.335778773Z" level=warning msg="cleaning up after shim disconnected" id=2e90f752fdc0601abb5401e228fa8355b97462cfd9f4dafb766f56eaf8e7b13a namespace=moby
	Feb 17 11:58:26 ha-783738 dockerd[1134]: time="2025-02-17T11:58:26.335795547Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2e90f752fdc06       019ee182b58e2       41 seconds ago       Exited              kube-controller-manager   4                   eeb1b6c34de35       kube-controller-manager-ha-783738
	0d8dd6abc6b02       95c0bda56fc4d       41 seconds ago       Exited              kube-apiserver            4                   a531c479908eb       kube-apiserver-ha-783738
	d524d25a3256e       2b0d6572d062c       About a minute ago   Running             kube-scheduler            2                   5633bc5aacc12       kube-scheduler-ha-783738
	2b8921c7d9f71       22f88dde2caa4       About a minute ago   Running             kube-vip                  1                   5f0329677cb70       kube-vip-ha-783738
	aeb757a6db075       a9e7e6b294baf       About a minute ago   Running             etcd                      2                   8c5c6a3fd0ba0       etcd-ha-783738
	8c236b02a8316       6e38f40d628db       4 minutes ago        Exited              storage-provisioner       3                   3b5478be91580       storage-provisioner
	f460be4118731       8c811b4aec35f       4 minutes ago        Exited              busybox                   1                   cd41205ee4990       busybox-58667487b6-mp8w2
	5caaef1da4142       e29f9c7391fd9       4 minutes ago        Exited              kube-proxy                1                   3bada7fe972b9       kube-proxy-pgwb4
	95f567924c5ee       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   1                   33c8d49183b1a       coredns-668d6bf9bc-bhrvt
	b4ccb469b39af       df3849d954c98       4 minutes ago        Exited              kindnet-cni               1                   bba5ce66a15dd       kindnet-t72ln
	b674f5b7afb38       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   1                   bfd8d387b7e96       coredns-668d6bf9bc-k5k72
	1395373a3c212       2b0d6572d062c       5 minutes ago        Exited              kube-scheduler            1                   fe3b7022472a7       kube-scheduler-ha-783738
	0644596c7e815       a9e7e6b294baf       5 minutes ago        Exited              etcd                      1                   a79f0d4414c0a       etcd-ha-783738
	905fe651f5a2d       22f88dde2caa4       5 minutes ago        Exited              kube-vip                  0                   6e727a24edb43       kube-vip-ha-783738
	
	
	==> coredns [95f567924c5e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:54083 - 5538 "HINFO IN 6952713337195609451.67698316276633629. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.046526479s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[586752551]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.037) (total time: 30004ms):
	Trace[586752551]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (11:54:29.042)
	Trace[586752551]: [30.004932204s] [30.004932204s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[31748474]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.037) (total time: 30005ms):
	Trace[31748474]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (11:54:29.043)
	Trace[31748474]: [30.005260877s] [30.005260877s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1254162758]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.043) (total time: 30000ms):
	Trace[1254162758]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:54:29.044)
	Trace[1254162758]: [30.000938039s] [30.000938039s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b674f5b7afb3] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47652 - 30454 "HINFO IN 3233588620932119307.6917908993167898246. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026177844s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1310151553]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.042) (total time: 30001ms):
	Trace[1310151553]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:54:29.043)
	Trace[1310151553]: [30.001216976s] [30.001216976s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1951418715]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.039) (total time: 30005ms):
	Trace[1951418715]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (11:54:29.044)
	Trace[1951418715]: [30.005382964s] [30.005382964s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[606941673]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.038) (total time: 30006ms):
	Trace[606941673]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30006ms (11:54:29.044)
	Trace[606941673]: [30.006431575s] [30.006431575s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0217 11:58:45.372891    3072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0217 11:58:45.374542    3072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0217 11:58:45.376150    3072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0217 11:58:45.377517    3072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0217 11:58:45.378814    3072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb17 11:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052638] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037697] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.851026] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.992141] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Feb17 11:57] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.664405] systemd-fstab-generator[476]: Ignoring "noauto" option for root device
	[  +0.058988] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058916] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
	[  +2.348725] systemd-fstab-generator[1055]: Ignoring "noauto" option for root device
	[  +0.313948] systemd-fstab-generator[1092]: Ignoring "noauto" option for root device
	[  +0.110900] systemd-fstab-generator[1104]: Ignoring "noauto" option for root device
	[  +0.140552] systemd-fstab-generator[1118]: Ignoring "noauto" option for root device
	[  +2.263360] kauditd_printk_skb: 199 callbacks suppressed
	[  +0.301992] systemd-fstab-generator[1377]: Ignoring "noauto" option for root device
	[  +0.125509] systemd-fstab-generator[1390]: Ignoring "noauto" option for root device
	[  +0.118202] systemd-fstab-generator[1402]: Ignoring "noauto" option for root device
	[  +0.144218] systemd-fstab-generator[1417]: Ignoring "noauto" option for root device
	[  +0.508597] systemd-fstab-generator[1584]: Ignoring "noauto" option for root device
	[  +6.843964] kauditd_printk_skb: 180 callbacks suppressed
	[  +8.294455] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [0644596c7e81] <==
	{"level":"warn","ts":"2025-02-17T11:56:37.953386Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"416.799075ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-02-17T11:56:37.953402Z","caller":"traceutil/trace.go:171","msg":"trace[234534568] range","detail":"{range_begin:/registry/deployments/; range_end:/registry/deployments0; }","duration":"416.832899ms","start":"2025-02-17T11:56:37.536564Z","end":"2025-02-17T11:56:37.953396Z","steps":["trace[234534568] 'agreement among raft nodes before linearized reading'  (duration: 416.815476ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-17T11:56:37.953416Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-17T11:56:37.536510Z","time spent":"416.902435ms","remote":"127.0.0.1:58532","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":0,"response size":0,"request content":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true "}
	2025/02/17 11:56:37 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-02-17T11:56:37.953469Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.057072714s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-02-17T11:56:37.953479Z","caller":"traceutil/trace.go:171","msg":"trace[2020420396] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"1.057490424s","start":"2025-02-17T11:56:36.895986Z","end":"2025-02-17T11:56:37.953476Z","steps":["trace[2020420396] 'agreement among raft nodes before linearized reading'  (duration: 1.057479846s)"],"step_count":1}
	{"level":"warn","ts":"2025-02-17T11:56:37.953491Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-17T11:56:36.895975Z","time spent":"1.057513489s","remote":"127.0.0.1:58120","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	2025/02/17 11:56:37 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-02-17T11:56:37.953557Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.889027766s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-02-17T11:56:37.953567Z","caller":"traceutil/trace.go:171","msg":"trace[159538693] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; }","duration":"1.889056203s","start":"2025-02-17T11:56:36.064508Z","end":"2025-02-17T11:56:37.953564Z","steps":["trace[159538693] 'agreement among raft nodes before linearized reading'  (duration: 1.88904446s)"],"step_count":1}
	{"level":"warn","ts":"2025-02-17T11:56:37.953580Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-17T11:56:36.064496Z","time spent":"1.889079683s","remote":"127.0.0.1:58254","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":0,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true "}
	2025/02/17 11:56:37 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-02-17T11:56:38.012328Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.249:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-17T11:56:38.012367Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.249:2379: use of closed network connection"}
	{"level":"info","ts":"2025-02-17T11:56:38.012413Z","caller":"etcdserver/server.go:1534","msg":"skipped leadership transfer; local server is not leader","local-member-id":"318ee90c3446d547","current-leader-member-id":"0"}
	{"level":"info","ts":"2025-02-17T11:56:38.012793Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.012892Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.012915Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.012991Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.013022Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.013134Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.013145Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.016636Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.39.249:2380"}
	{"level":"info","ts":"2025-02-17T11:56:38.016720Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.39.249:2380"}
	{"level":"info","ts":"2025-02-17T11:56:38.016728Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"ha-783738","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.249:2380"],"advertise-client-urls":["https://192.168.39.249:2379"]}
	
	
	==> etcd [aeb757a6db07] <==
	{"level":"warn","ts":"2025-02-17T11:58:39.334913Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069265,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-02-17T11:58:39.836002Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069265,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-02-17T11:58:40.336905Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069265,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-02-17T11:58:40.836559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 is starting a new election at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:40.836692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:40.836729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgPreVoteResp from 318ee90c3446d547 at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:40.836762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 3, index: 3030] sent MsgPreVote request to 645ac05e9f2d470a at term 3"}
	{"level":"warn","ts":"2025-02-17T11:58:40.837045Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069265,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-02-17T11:58:41.084143Z","caller":"etcdserver/server.go:2161","msg":"failed to publish local member to cluster through raft","local-member-id":"318ee90c3446d547","local-member-attributes":"{Name:ha-783738 ClientURLs:[https://192.168.39.249:2379]}","request-path":"/0/members/318ee90c3446d547/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2025-02-17T11:58:41.337434Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069265,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-02-17T11:58:41.827365Z","caller":"etcdserver/v3_server.go:932","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2025-02-17T11:58:41.827445Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.000504247s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2025-02-17T11:58:41.827469Z","caller":"traceutil/trace.go:171","msg":"trace[1958910963] range","detail":"{range_begin:; range_end:; }","duration":"7.000551306s","start":"2025-02-17T11:58:34.826907Z","end":"2025-02-17T11:58:41.827459Z","steps":["trace[1958910963] 'agreement among raft nodes before linearized reading'  (duration: 7.000502454s)"],"step_count":1}
	{"level":"error","ts":"2025-02-17T11:58:41.827501Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2171\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2688\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3142\nnet/http.(*conn).serve\n\tnet/http/server.go:2044"}
	{"level":"info","ts":"2025-02-17T11:58:42.436651Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 is starting a new election at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:42.436750Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:42.436772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgPreVoteResp from 318ee90c3446d547 at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:42.436803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 3, index: 3030] sent MsgPreVote request to 645ac05e9f2d470a at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:44.036156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 is starting a new election at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:44.036195Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:44.036247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgPreVoteResp from 318ee90c3446d547 at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:44.036264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 3, index: 3030] sent MsgPreVote request to 645ac05e9f2d470a at term 3"}
	{"level":"warn","ts":"2025-02-17T11:58:44.107198Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"645ac05e9f2d470a","rtt":"0s","error":"dial tcp 192.168.39.31:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-02-17T11:58:44.107261Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"645ac05e9f2d470a","rtt":"0s","error":"dial tcp 192.168.39.31:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-02-17T11:58:45.328421Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069268,"retry-timeout":"500ms"}
	
	
	==> kernel <==
	 11:58:45 up 1 min,  0 users,  load average: 0.52, 0.30, 0.11
	Linux ha-783738 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b4ccb469b39a] <==
	I0217 11:56:00.000922       1 main.go:324] Node ha-783738-m02 has CIDR [10.244.1.0/24] 
	I0217 11:56:00.001386       1 main.go:297] Handling node with IPs: map[192.168.39.216:{}]
	I0217 11:56:00.001417       1 main.go:324] Node ha-783738-m03 has CIDR [10.244.2.0/24] 
	I0217 11:56:00.002870       1 main.go:297] Handling node with IPs: map[192.168.39.168:{}]
	I0217 11:56:00.003089       1 main.go:324] Node ha-783738-m04 has CIDR [10.244.3.0/24] 
	I0217 11:56:10.003758       1 main.go:297] Handling node with IPs: map[192.168.39.31:{}]
	I0217 11:56:10.004120       1 main.go:324] Node ha-783738-m02 has CIDR [10.244.1.0/24] 
	I0217 11:56:10.004466       1 main.go:297] Handling node with IPs: map[192.168.39.216:{}]
	I0217 11:56:10.004579       1 main.go:324] Node ha-783738-m03 has CIDR [10.244.2.0/24] 
	I0217 11:56:10.004848       1 main.go:297] Handling node with IPs: map[192.168.39.168:{}]
	I0217 11:56:10.004993       1 main.go:324] Node ha-783738-m04 has CIDR [10.244.3.0/24] 
	I0217 11:56:10.005322       1 main.go:297] Handling node with IPs: map[192.168.39.249:{}]
	I0217 11:56:10.005440       1 main.go:301] handling current node
	I0217 11:56:20.008868       1 main.go:297] Handling node with IPs: map[192.168.39.249:{}]
	I0217 11:56:20.008992       1 main.go:301] handling current node
	I0217 11:56:20.009032       1 main.go:297] Handling node with IPs: map[192.168.39.31:{}]
	I0217 11:56:20.009107       1 main.go:324] Node ha-783738-m02 has CIDR [10.244.1.0/24] 
	I0217 11:56:20.009351       1 main.go:297] Handling node with IPs: map[192.168.39.168:{}]
	I0217 11:56:20.009426       1 main.go:324] Node ha-783738-m04 has CIDR [10.244.3.0/24] 
	I0217 11:56:30.000205       1 main.go:297] Handling node with IPs: map[192.168.39.168:{}]
	I0217 11:56:30.000320       1 main.go:324] Node ha-783738-m04 has CIDR [10.244.3.0/24] 
	I0217 11:56:30.000673       1 main.go:297] Handling node with IPs: map[192.168.39.249:{}]
	I0217 11:56:30.004120       1 main.go:301] handling current node
	I0217 11:56:30.004403       1 main.go:297] Handling node with IPs: map[192.168.39.31:{}]
	I0217 11:56:30.004484       1 main.go:324] Node ha-783738-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [0d8dd6abc6b0] <==
	W0217 11:58:05.008746       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0217 11:58:05.009254       1 options.go:238] external host was not specified, using 192.168.39.249
	I0217 11:58:05.012100       1 server.go:143] Version: v1.32.1
	I0217 11:58:05.012139       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0217 11:58:05.254592       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0217 11:58:05.265931       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0217 11:58:05.302917       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0217 11:58:05.302958       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0217 11:58:05.303380       1 instance.go:233] Using reconciler: lease
	W0217 11:58:25.253372       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0217 11:58:25.253478       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0217 11:58:25.304453       1 instance.go:226] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [2e90f752fdc0] <==
	I0217 11:58:05.575513       1 serving.go:386] Generated self-signed cert in-memory
	I0217 11:58:05.850219       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0217 11:58:05.850380       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0217 11:58:05.851835       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0217 11:58:05.852508       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0217 11:58:05.852713       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0217 11:58:05.852833       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0217 11:58:26.312388       1 controllermanager.go:230] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.249:8443/healthz\": dial tcp 192.168.39.249:8443: connect: connection refused"
	
	
	==> kube-proxy [5caaef1da414] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0217 11:53:59.616708       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0217 11:53:59.651486       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.249"]
	E0217 11:53:59.651650       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0217 11:53:59.696326       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0217 11:53:59.696377       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0217 11:53:59.696401       1 server_linux.go:170] "Using iptables Proxier"
	I0217 11:53:59.710221       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0217 11:53:59.711347       1 server.go:497] "Version info" version="v1.32.1"
	I0217 11:53:59.711380       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0217 11:53:59.716398       1 config.go:199] "Starting service config controller"
	I0217 11:53:59.717714       1 config.go:105] "Starting endpoint slice config controller"
	I0217 11:53:59.717746       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0217 11:53:59.718142       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0217 11:53:59.718615       1 config.go:329] "Starting node config controller"
	I0217 11:53:59.718758       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0217 11:53:59.817915       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0217 11:53:59.819456       1 shared_informer.go:320] Caches are synced for service config
	I0217 11:53:59.821373       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1395373a3c21] <==
	E0217 11:53:52.919534       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:53.771964       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:53.772105       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:55.316775       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.249:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:55.316841       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.249:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:55.317229       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.249:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:55.317287       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.249:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:55.599247       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.249:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:55.599332       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.249:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:55.855471       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.249:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:55.855524       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.249:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:56.059180       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:56.059238       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:59.073926       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0217 11:53:59.074031       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0217 11:53:59.074570       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0217 11:53:59.075126       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0217 11:53:59.075450       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0217 11:53:59.074624       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0217 11:54:13.896773       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0217 11:56:05.957670       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-58667487b6-v7x5t\": pod busybox-58667487b6-v7x5t is already assigned to node \"ha-783738-m04\"" plugin="DefaultBinder" pod="default/busybox-58667487b6-v7x5t" node="ha-783738-m04"
	E0217 11:56:05.971236       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod c5148a30-9b13-42ed-87c8-723413b074d3(default/busybox-58667487b6-v7x5t) wasn't assumed so cannot be forgotten" pod="default/busybox-58667487b6-v7x5t"
	E0217 11:56:05.971303       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-58667487b6-v7x5t\": pod busybox-58667487b6-v7x5t is already assigned to node \"ha-783738-m04\"" pod="default/busybox-58667487b6-v7x5t"
	I0217 11:56:05.971509       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-58667487b6-v7x5t" node="ha-783738-m04"
	E0217 11:56:37.999387       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d524d25a3256] <==
	E0217 11:58:26.313559       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.249:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37922->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.313700       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37926->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.313773       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37926->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.313906       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.249:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37956->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.313971       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.249:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37956->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.314101       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.249:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37960->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.314185       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.249:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37960->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.314462       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37888->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.314547       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37888->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.314713       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37930->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.314798       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37930->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.314960       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.249:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37948->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.315166       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.249:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37948->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.315243       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: Get "https://192.168.39.249:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37940->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.315352       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.249:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37940->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:29.432094       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.249:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:58:29.432235       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.249:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:58:32.758441       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.249:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:58:32.758583       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.249:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:58:33.069242       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:58:33.069380       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:58:35.727701       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.249:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:58:35.727922       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.249:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:58:36.974377       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.249:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:58:36.974419       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.249:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Feb 17 11:58:27 ha-783738 kubelet[1591]: E0217 11:58:27.238622    1591 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-783738.1824fce9ab5e06e9  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-783738,UID:ha-783738,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-783738,},FirstTimestamp:2025-02-17 11:57:16.604499689 +0000 UTC m=+0.220042798,LastTimestamp:2025-02-17 11:57:16.604499689 +0000 UTC m=+0.220042798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-783738,}"
	Feb 17 11:58:30 ha-783738 kubelet[1591]: E0217 11:58:30.957653    1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-783738\" not found" node="ha-783738"
	Feb 17 11:58:30 ha-783738 kubelet[1591]: I0217 11:58:30.957784    1591 scope.go:117] "RemoveContainer" containerID="0d8dd6abc6b0262f0e2de062685df6bbc87187dd14023d0fd12b894f48bd2001"
	Feb 17 11:58:30 ha-783738 kubelet[1591]: E0217 11:58:30.957928    1591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-783738_kube-system(77f0e47471ffa89381403ccfd101e5e7)\"" pod="kube-system/kube-apiserver-ha-783738" podUID="77f0e47471ffa89381403ccfd101e5e7"
	Feb 17 11:58:31 ha-783738 kubelet[1591]: I0217 11:58:31.169391    1591 kubelet_node_status.go:76] "Attempting to register node" node="ha-783738"
	Feb 17 11:58:32 ha-783738 kubelet[1591]: E0217 11:58:32.182236    1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-783738\" not found" node="ha-783738"
	Feb 17 11:58:32 ha-783738 kubelet[1591]: I0217 11:58:32.182362    1591 scope.go:117] "RemoveContainer" containerID="2e90f752fdc0601abb5401e228fa8355b97462cfd9f4dafb766f56eaf8e7b13a"
	Feb 17 11:58:32 ha-783738 kubelet[1591]: E0217 11:58:32.182489    1591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-783738_kube-system(37cb2af166ca362ca24afd5a80241d47)\"" pod="kube-system/kube-controller-manager-ha-783738" podUID="37cb2af166ca362ca24afd5a80241d47"
	Feb 17 11:58:33 ha-783738 kubelet[1591]: E0217 11:58:33.382650    1591 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.254:8443: connect: no route to host" node="ha-783738"
	Feb 17 11:58:33 ha-783738 kubelet[1591]: E0217 11:58:33.382815    1591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-783738?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Feb 17 11:58:33 ha-783738 kubelet[1591]: W0217 11:58:33.382655    1591 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-783738&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	Feb 17 11:58:33 ha-783738 kubelet[1591]: E0217 11:58:33.383127    1591 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-783738&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Feb 17 11:58:36 ha-783738 kubelet[1591]: E0217 11:58:36.704343    1591 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-783738\" not found"
	Feb 17 11:58:37 ha-783738 kubelet[1591]: E0217 11:58:37.748003    1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-783738\" not found" node="ha-783738"
	Feb 17 11:58:39 ha-783738 kubelet[1591]: E0217 11:58:39.526616    1591 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-783738.1824fce9ab5e06e9  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-783738,UID:ha-783738,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-783738,},FirstTimestamp:2025-02-17 11:57:16.604499689 +0000 UTC m=+0.220042798,LastTimestamp:2025-02-17 11:57:16.604499689 +0000 UTC m=+0.220042798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-783738,}"
	Feb 17 11:58:39 ha-783738 kubelet[1591]: E0217 11:58:39.748034    1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-783738\" not found" node="ha-783738"
	Feb 17 11:58:40 ha-783738 kubelet[1591]: I0217 11:58:40.384759    1591 kubelet_node_status.go:76] "Attempting to register node" node="ha-783738"
	Feb 17 11:58:42 ha-783738 kubelet[1591]: E0217 11:58:42.599676    1591 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.254:8443: connect: no route to host" node="ha-783738"
	Feb 17 11:58:42 ha-783738 kubelet[1591]: E0217 11:58:42.599851    1591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-783738?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Feb 17 11:58:43 ha-783738 kubelet[1591]: E0217 11:58:43.747946    1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-783738\" not found" node="ha-783738"
	Feb 17 11:58:43 ha-783738 kubelet[1591]: I0217 11:58:43.748020    1591 scope.go:117] "RemoveContainer" containerID="0d8dd6abc6b0262f0e2de062685df6bbc87187dd14023d0fd12b894f48bd2001"
	Feb 17 11:58:43 ha-783738 kubelet[1591]: E0217 11:58:43.748145    1591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-783738_kube-system(77f0e47471ffa89381403ccfd101e5e7)\"" pod="kube-system/kube-apiserver-ha-783738" podUID="77f0e47471ffa89381403ccfd101e5e7"
	Feb 17 11:58:44 ha-783738 kubelet[1591]: E0217 11:58:44.748575    1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-783738\" not found" node="ha-783738"
	Feb 17 11:58:44 ha-783738 kubelet[1591]: I0217 11:58:44.749252    1591 scope.go:117] "RemoveContainer" containerID="2e90f752fdc0601abb5401e228fa8355b97462cfd9f4dafb766f56eaf8e7b13a"
	Feb 17 11:58:44 ha-783738 kubelet[1591]: E0217 11:58:44.750099    1591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-783738_kube-system(37cb2af166ca362ca24afd5a80241d47)\"" pod="kube-system/kube-controller-manager-ha-783738" podUID="37cb2af166ca362ca24afd5a80241d47"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-783738 -n ha-783738
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-783738 -n ha-783738: exit status 2 (228.316969ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-783738" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (1.60s)
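
Note: this failure is downstream of the RestartCluster failure above. The etcd log shows the surviving member unable to win an election (its peer at 192.168.39.31:2380 refuses connections), kube-apiserver exits while waiting on etcd ("Error creating leases"), kubelet reports it in CrashLoopBackOff, and so status reports the apiserver as Stopped. A minimal sketch of how one might confirm this by hand, reusing the binary and profile names from the log above; the readyz curl is an illustrative assumption, not something the harness runs:

	# Re-run the probe the harness used; in this state it prints "Stopped" and exits 2.
	out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-783738 -n ha-783738
	# Hypothetical follow-up: distinguish "connection refused" from an HTTP answer
	# at the apiserver port seen in the kubectl errors above (localhost:8443).
	out/minikube-linux-amd64 ssh -p ha-783738 "curl -sk https://localhost:8443/readyz"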

x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.75s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-783738" in json of 'profile list' to include 4 nodes but have 3 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-783738\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-783738\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.32.1\",\"ClusterName\":\"ha-783738\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.249\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.31\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.168\",\"Port\":0,\"KubernetesVersion\":\"v1.32.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-783738" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-783738\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-783738\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.32.1\",\"ClusterName\":\"ha-783738\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.249\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.31\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.168\",\"Port\":0,\"KubernetesVersion\":\"v1.32.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-783738 -n ha-783738
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-783738 -n ha-783738: exit status 2 (223.622688ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
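Note that minikube status --format={{.Host}} printed Running while exiting 2: status reports degraded component state through its exit code rather than failing outright, which is why the harness flags the non-zero exit as "may be ok". A small, hypothetical Go sketch of separating the printed host state from the exit code (binary path and profile name taken from the run above; the exit-code meanings themselves are minikube internals not shown here):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status", "--format={{.Host}}", "-p", "ha-783738")
	out, err := cmd.Output() // stdout is still populated on a non-zero exit
	host := strings.TrimSpace(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Non-zero exit with usable output: report both and let the caller decide.
		fmt.Printf("host=%q exit=%d (non-zero here may still be ok)\n", host, ee.ExitCode())
		return
	}
	if err != nil {
		panic(err) // the binary could not be run at all
	}
	fmt.Printf("host=%q exit=0\n", host)
}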
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:50 UTC | 17 Feb 25 11:50 UTC |
	|         | ha-783738-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n ha-783738-m04 sudo cat                                          | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:50 UTC | 17 Feb 25 11:50 UTC |
	|         | /home/docker/cp-test_ha-783738-m03_ha-783738-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-783738 cp testdata/cp-test.txt                                                | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:50 UTC | 17 Feb 25 11:50 UTC |
	|         | ha-783738-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:50 UTC | 17 Feb 25 11:50 UTC |
	|         | ha-783738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-783738 cp ha-783738-m04:/home/docker/cp-test.txt                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:50 UTC | 17 Feb 25 11:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3703533036/001/cp-test_ha-783738-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-783738 cp ha-783738-m04:/home/docker/cp-test.txt                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738:/home/docker/cp-test_ha-783738-m04_ha-783738.txt                       |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n ha-783738 sudo cat                                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | /home/docker/cp-test_ha-783738-m04_ha-783738.txt                                 |           |         |         |                     |                     |
	| cp      | ha-783738 cp ha-783738-m04:/home/docker/cp-test.txt                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m02:/home/docker/cp-test_ha-783738-m04_ha-783738-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n ha-783738-m02 sudo cat                                          | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | /home/docker/cp-test_ha-783738-m04_ha-783738-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-783738 cp ha-783738-m04:/home/docker/cp-test.txt                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m03:/home/docker/cp-test_ha-783738-m04_ha-783738-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | ha-783738-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-783738 ssh -n ha-783738-m03 sudo cat                                          | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | /home/docker/cp-test_ha-783738-m04_ha-783738-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-783738 node stop m02 -v=7                                                     | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-783738 node start m02 -v=7                                                    | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:51 UTC | 17 Feb 25 11:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-783738 -v=7                                                           | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-783738 -v=7                                                                | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:52 UTC | 17 Feb 25 11:52 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-783738 --wait=true -v=7                                                    | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:52 UTC | 17 Feb 25 11:56 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-783738                                                                | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:56 UTC |                     |
	| node    | ha-783738 node delete m03 -v=7                                                   | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:56 UTC | 17 Feb 25 11:56 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-783738 stop -v=7                                                              | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:56 UTC | 17 Feb 25 11:56 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-783738 --wait=true                                                         | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:56 UTC |                     |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	| node    | add -p ha-783738                                                                 | ha-783738 | jenkins | v1.35.0 | 17 Feb 25 11:58 UTC |                     |
	|         | --control-plane -v=7                                                             |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/17 11:56:50
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0217 11:56:50.215291  100380 out.go:345] Setting OutFile to fd 1 ...
	I0217 11:56:50.215609  100380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:56:50.215619  100380 out.go:358] Setting ErrFile to fd 2...
	I0217 11:56:50.215624  100380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:56:50.215819  100380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-77349/.minikube/bin
	I0217 11:56:50.216353  100380 out.go:352] Setting JSON to false
	I0217 11:56:50.217237  100380 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5958,"bootTime":1739787452,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0217 11:56:50.217362  100380 start.go:139] virtualization: kvm guest
	I0217 11:56:50.219910  100380 out.go:177] * [ha-783738] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0217 11:56:50.221323  100380 out.go:177]   - MINIKUBE_LOCATION=20427
	I0217 11:56:50.221334  100380 notify.go:220] Checking for updates...
	I0217 11:56:50.223835  100380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0217 11:56:50.224954  100380 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:56:50.226180  100380 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-77349/.minikube
	I0217 11:56:50.227361  100380 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0217 11:56:50.228473  100380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0217 11:56:50.229885  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:56:50.230261  100380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:56:50.230308  100380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:56:50.245239  100380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46091
	I0217 11:56:50.245761  100380 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:56:50.246359  100380 main.go:141] libmachine: Using API Version  1
	I0217 11:56:50.246382  100380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:56:50.246775  100380 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:56:50.246962  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:56:50.247230  100380 driver.go:394] Setting default libvirt URI to qemu:///system
	I0217 11:56:50.247538  100380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:56:50.247594  100380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:56:50.262713  100380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
	I0217 11:56:50.263097  100380 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:56:50.263692  100380 main.go:141] libmachine: Using API Version  1
	I0217 11:56:50.263752  100380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:56:50.264059  100380 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:56:50.264289  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:56:50.297981  100380 out.go:177] * Using the kvm2 driver based on existing profile
	I0217 11:56:50.299143  100380 start.go:297] selected driver: kvm2
	I0217 11:56:50.299155  100380 start.go:901] validating driver "kvm2" against &{Name:ha-783738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.168 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 11:56:50.299304  100380 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0217 11:56:50.299646  100380 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0217 11:56:50.299706  100380 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20427-77349/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0217 11:56:50.314229  100380 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0217 11:56:50.314917  100380 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0217 11:56:50.314949  100380 cni.go:84] Creating CNI manager for ""
	I0217 11:56:50.315000  100380 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0217 11:56:50.315060  100380 start.go:340] cluster config:
	{Name:ha-783738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.168 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 11:56:50.315190  100380 iso.go:125] acquiring lock: {Name:mk4380b7bda8fcd8bced9705ff1695c3fb7dac0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0217 11:56:50.317519  100380 out.go:177] * Starting "ha-783738" primary control-plane node in "ha-783738" cluster
	I0217 11:56:50.318547  100380 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0217 11:56:50.318578  100380 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20427-77349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0217 11:56:50.318588  100380 cache.go:56] Caching tarball of preloaded images
	I0217 11:56:50.318681  100380 preload.go:172] Found /home/jenkins/minikube-integration/20427-77349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0217 11:56:50.318695  100380 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0217 11:56:50.318829  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:56:50.319009  100380 start.go:360] acquireMachinesLock for ha-783738: {Name:mk05ba8323ae77ab7dcc14c378d65810d956fdc0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0217 11:56:50.319055  100380 start.go:364] duration metric: took 23.519µs to acquireMachinesLock for "ha-783738"
	I0217 11:56:50.319080  100380 start.go:96] Skipping create...Using existing machine configuration
	I0217 11:56:50.319088  100380 fix.go:54] fixHost starting: 
	I0217 11:56:50.319353  100380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:56:50.319391  100380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:56:50.333761  100380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34803
	I0217 11:56:50.334152  100380 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:56:50.334693  100380 main.go:141] libmachine: Using API Version  1
	I0217 11:56:50.334714  100380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:56:50.335000  100380 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:56:50.335210  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:56:50.335347  100380 main.go:141] libmachine: (ha-783738) Calling .GetState
	I0217 11:56:50.336730  100380 fix.go:112] recreateIfNeeded on ha-783738: state=Stopped err=<nil>
	I0217 11:56:50.336752  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	W0217 11:56:50.336864  100380 fix.go:138] unexpected machine state, will restart: <nil>
	I0217 11:56:50.338814  100380 out.go:177] * Restarting existing kvm2 VM for "ha-783738" ...
	I0217 11:56:50.340020  100380 main.go:141] libmachine: (ha-783738) Calling .Start
	I0217 11:56:50.340200  100380 main.go:141] libmachine: (ha-783738) starting domain...
	I0217 11:56:50.340221  100380 main.go:141] libmachine: (ha-783738) ensuring networks are active...
	I0217 11:56:50.340845  100380 main.go:141] libmachine: (ha-783738) Ensuring network default is active
	I0217 11:56:50.341268  100380 main.go:141] libmachine: (ha-783738) Ensuring network mk-ha-783738 is active
	I0217 11:56:50.341612  100380 main.go:141] libmachine: (ha-783738) getting domain XML...
	I0217 11:56:50.342286  100380 main.go:141] libmachine: (ha-783738) creating domain...
	I0217 11:56:51.533335  100380 main.go:141] libmachine: (ha-783738) waiting for IP...
	I0217 11:56:51.534198  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:51.534571  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:51.534631  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:51.534554  100416 retry.go:31] will retry after 214.112758ms: waiting for domain to come up
	I0217 11:56:51.750038  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:51.750535  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:51.750587  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:51.750528  100416 retry.go:31] will retry after 287.575076ms: waiting for domain to come up
	I0217 11:56:52.040019  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:52.040473  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:52.040515  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:52.040452  100416 retry.go:31] will retry after 303.389275ms: waiting for domain to come up
	I0217 11:56:52.345057  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:52.345400  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:52.345452  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:52.345383  100416 retry.go:31] will retry after 580.610288ms: waiting for domain to come up
	I0217 11:56:52.927102  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:52.927623  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:52.927663  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:52.927596  100416 retry.go:31] will retry after 470.88869ms: waiting for domain to come up
	I0217 11:56:53.400293  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:53.400698  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:53.400725  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:53.400636  100416 retry.go:31] will retry after 645.102407ms: waiting for domain to come up
	I0217 11:56:54.046798  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:54.047309  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:54.047365  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:54.047265  100416 retry.go:31] will retry after 993.016218ms: waiting for domain to come up
	I0217 11:56:55.041450  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:55.041808  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:55.041828  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:55.041790  100416 retry.go:31] will retry after 1.096274529s: waiting for domain to come up
	I0217 11:56:56.139475  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:56.139892  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:56.139957  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:56.139882  100416 retry.go:31] will retry after 1.840421804s: waiting for domain to come up
	I0217 11:56:57.981618  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:57.982040  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:57.982068  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:57.981979  100416 retry.go:31] will retry after 1.8969141s: waiting for domain to come up
	I0217 11:56:59.881026  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:56:59.881535  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:56:59.881570  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:56:59.881471  100416 retry.go:31] will retry after 1.890240518s: waiting for domain to come up
	I0217 11:57:01.773274  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:01.773728  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:57:01.773779  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:57:01.773696  100416 retry.go:31] will retry after 3.046762911s: waiting for domain to come up
	I0217 11:57:04.823999  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:04.824458  100380 main.go:141] libmachine: (ha-783738) DBG | unable to find current IP address of domain ha-783738 in network mk-ha-783738
	I0217 11:57:04.824497  100380 main.go:141] libmachine: (ha-783738) DBG | I0217 11:57:04.824453  100416 retry.go:31] will retry after 3.819063496s: waiting for domain to come up
	I0217 11:57:08.647831  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.648309  100380 main.go:141] libmachine: (ha-783738) found domain IP: 192.168.39.249
	I0217 11:57:08.648334  100380 main.go:141] libmachine: (ha-783738) reserving static IP address...
	I0217 11:57:08.648347  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has current primary IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.648799  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "ha-783738", mac: "52:54:00:fb:6f:65", ip: "192.168.39.249"} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.648824  100380 main.go:141] libmachine: (ha-783738) DBG | skip adding static IP to network mk-ha-783738 - found existing host DHCP lease matching {name: "ha-783738", mac: "52:54:00:fb:6f:65", ip: "192.168.39.249"}
	I0217 11:57:08.648835  100380 main.go:141] libmachine: (ha-783738) reserved static IP address 192.168.39.249 for domain ha-783738
	I0217 11:57:08.648846  100380 main.go:141] libmachine: (ha-783738) waiting for SSH...
	I0217 11:57:08.648862  100380 main.go:141] libmachine: (ha-783738) DBG | Getting to WaitForSSH function...
	I0217 11:57:08.650828  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.651193  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.651224  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.651387  100380 main.go:141] libmachine: (ha-783738) DBG | Using SSH client type: external
	I0217 11:57:08.651414  100380 main.go:141] libmachine: (ha-783738) DBG | Using SSH private key: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa (-rw-------)
	I0217 11:57:08.651435  100380 main.go:141] libmachine: (ha-783738) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0217 11:57:08.651464  100380 main.go:141] libmachine: (ha-783738) DBG | About to run SSH command:
	I0217 11:57:08.651480  100380 main.go:141] libmachine: (ha-783738) DBG | exit 0
	I0217 11:57:08.776922  100380 main.go:141] libmachine: (ha-783738) DBG | SSH cmd err, output: <nil>: 
	I0217 11:57:08.777326  100380 main.go:141] libmachine: (ha-783738) Calling .GetConfigRaw
	I0217 11:57:08.777959  100380 main.go:141] libmachine: (ha-783738) Calling .GetIP
	I0217 11:57:08.780301  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.780692  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.780735  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.780948  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:57:08.781137  100380 machine.go:93] provisionDockerMachine start ...
	I0217 11:57:08.781154  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:08.781442  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:08.783478  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.783868  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.783897  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.784048  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:08.784237  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:08.784393  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:08.784570  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:08.784738  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:08.784917  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:08.784928  100380 main.go:141] libmachine: About to run SSH command:
	hostname
	I0217 11:57:08.889484  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0217 11:57:08.889525  100380 main.go:141] libmachine: (ha-783738) Calling .GetMachineName
	I0217 11:57:08.889783  100380 buildroot.go:166] provisioning hostname "ha-783738"
	I0217 11:57:08.889818  100380 main.go:141] libmachine: (ha-783738) Calling .GetMachineName
	I0217 11:57:08.890003  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:08.892666  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.893027  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:08.893060  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:08.893202  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:08.893391  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:08.893536  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:08.893661  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:08.893787  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:08.893949  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:08.893960  100380 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-783738 && echo "ha-783738" | sudo tee /etc/hostname
	I0217 11:57:09.014626  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-783738
	
	I0217 11:57:09.014653  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.017274  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.017710  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.017744  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.017939  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.018131  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.018348  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.018473  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.018701  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:09.018967  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:09.018994  100380 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-783738' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-783738/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-783738' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0217 11:57:09.133208  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0217 11:57:09.133247  100380 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20427-77349/.minikube CaCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20427-77349/.minikube}
	I0217 11:57:09.133278  100380 buildroot.go:174] setting up certificates
	I0217 11:57:09.133295  100380 provision.go:84] configureAuth start
	I0217 11:57:09.133331  100380 main.go:141] libmachine: (ha-783738) Calling .GetMachineName
	I0217 11:57:09.133680  100380 main.go:141] libmachine: (ha-783738) Calling .GetIP
	I0217 11:57:09.136393  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.136746  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.136771  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.136918  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.139192  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.139545  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.139583  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.139699  100380 provision.go:143] copyHostCerts
	I0217 11:57:09.139734  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem
	I0217 11:57:09.139786  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem, removing ...
	I0217 11:57:09.139804  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem
	I0217 11:57:09.139883  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem (1082 bytes)
	I0217 11:57:09.139996  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem
	I0217 11:57:09.140023  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem, removing ...
	I0217 11:57:09.140030  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem
	I0217 11:57:09.140079  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem (1123 bytes)
	I0217 11:57:09.140159  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem
	I0217 11:57:09.140184  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem, removing ...
	I0217 11:57:09.140191  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem
	I0217 11:57:09.140228  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem (1675 bytes)
	I0217 11:57:09.140314  100380 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem org=jenkins.ha-783738 san=[127.0.0.1 192.168.39.249 ha-783738 localhost minikube]
	I0217 11:57:09.269804  100380 provision.go:177] copyRemoteCerts
	I0217 11:57:09.269900  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0217 11:57:09.269935  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.272592  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.272916  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.272945  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.273095  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.273282  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.273464  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.273600  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:09.355256  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0217 11:57:09.355331  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0217 11:57:09.378132  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0217 11:57:09.378243  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0217 11:57:09.399749  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0217 11:57:09.399830  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0217 11:57:09.421183  100380 provision.go:87] duration metric: took 287.855291ms to configureAuth
	I0217 11:57:09.421207  100380 buildroot.go:189] setting minikube options for container-runtime
	I0217 11:57:09.421432  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:57:09.421460  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:09.421765  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.424701  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.425141  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.425173  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.425370  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.425557  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.425734  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.425883  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.426059  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:09.426283  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:09.426297  100380 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0217 11:57:09.534976  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0217 11:57:09.535006  100380 buildroot.go:70] root file system type: tmpfs
	I0217 11:57:09.535125  100380 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0217 11:57:09.535163  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.537739  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.538108  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.538126  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.538307  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.538481  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.538662  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.538802  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.538949  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:09.539142  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:09.539243  100380 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0217 11:57:09.658326  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0217 11:57:09.658371  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:09.661372  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.661838  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:09.661875  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:09.662085  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:09.662300  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.662435  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:09.662559  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:09.662707  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:09.662897  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:09.662913  100380 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0217 11:57:11.588699  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0217 11:57:11.588766  100380 machine.go:96] duration metric: took 2.807616414s to provisionDockerMachine
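
Editor's note: the `sudo diff -u ... || { sudo mv ...; sudo systemctl -f restart docker; }` command above makes the unit install idempotent. When the rendered docker.service matches what is already on disk, diff exits 0 and the restart is skipped; in this run the file did not exist yet, so it was installed and docker restarted. A minimal write-if-changed sketch of the same idea in Go (hypothetical demo path; daemon-reload/restart handling omitted):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged writes unit to path only when the content differs,
// mirroring the shell pattern above: it returns true when a
// daemon-reload and service restart would be needed.
func installIfChanged(path string, unit []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, unit) {
		return false, nil // identical content: diff would exit 0, restart skipped
	}
	if err != nil && !os.IsNotExist(err) {
		return false, err
	}
	if err := os.WriteFile(path, unit, 0o644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	changed, err := installIfChanged("/tmp/docker.service.demo", []byte("[Unit]\nDescription=demo\n"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("restart needed:", changed)
}
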
	I0217 11:57:11.588782  100380 start.go:293] postStartSetup for "ha-783738" (driver="kvm2")
	I0217 11:57:11.588792  100380 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0217 11:57:11.588810  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.589177  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0217 11:57:11.589221  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.592192  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.592596  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.592627  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.592785  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.592979  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.593170  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.593334  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:11.675232  100380 ssh_runner.go:195] Run: cat /etc/os-release
	I0217 11:57:11.679319  100380 info.go:137] Remote host: Buildroot 2023.02.9
	I0217 11:57:11.679347  100380 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-77349/.minikube/addons for local assets ...
	I0217 11:57:11.679434  100380 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-77349/.minikube/files for local assets ...
	I0217 11:57:11.679553  100380 filesync.go:149] local asset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> 845022.pem in /etc/ssl/certs
	I0217 11:57:11.679569  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> /etc/ssl/certs/845022.pem
	I0217 11:57:11.679700  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0217 11:57:11.688596  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem --> /etc/ssl/certs/845022.pem (1708 bytes)
	I0217 11:57:11.712948  100380 start.go:296] duration metric: took 124.147315ms for postStartSetup
	I0217 11:57:11.713041  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.713388  100380 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0217 11:57:11.713431  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.716109  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.716482  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.716509  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.716697  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.716902  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.717111  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.717237  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:11.799568  100380 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0217 11:57:11.799647  100380 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0217 11:57:11.840659  100380 fix.go:56] duration metric: took 21.521561421s for fixHost
	I0217 11:57:11.840710  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.843711  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.844159  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.844211  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.844334  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.844538  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.844685  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.844877  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.845064  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:11.845292  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0217 11:57:11.845324  100380 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0217 11:57:11.961693  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739793431.919777749
	
	I0217 11:57:11.961720  100380 fix.go:216] guest clock: 1739793431.919777749
	I0217 11:57:11.961728  100380 fix.go:229] Guest: 2025-02-17 11:57:11.919777749 +0000 UTC Remote: 2025-02-17 11:57:11.840688548 +0000 UTC m=+21.663425668 (delta=79.089201ms)
	I0217 11:57:11.961764  100380 fix.go:200] guest clock delta is within tolerance: 79.089201ms
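
Editor's note: the clock check above reads `date +%s.%N` from the guest and compares it against the host's clock; a resync is only forced when the delta exceeds a tolerance. A sketch of the parse-and-compare step (the one-second tolerance here is a stand-in, not minikube's exact threshold):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output such as
// "1739793431.919777749" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = time.Second // illustrative threshold only
	guest, err := parseGuestClock("1739793431.919777749\n")
	if err != nil {
		panic(err)
	}
	delta := time.Duration(math.Abs(float64(time.Since(guest))))
	fmt.Printf("guest clock delta: %v (resync needed: %v)\n", delta, delta > tolerance)
}
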
	I0217 11:57:11.961771  100380 start.go:83] releasing machines lock for "ha-783738", held for 21.642703542s
	I0217 11:57:11.961797  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.962076  100380 main.go:141] libmachine: (ha-783738) Calling .GetIP
	I0217 11:57:11.964739  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.965072  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.965098  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.965245  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.965780  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.965938  100380 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:57:11.966020  100380 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0217 11:57:11.966085  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.966153  100380 ssh_runner.go:195] Run: cat /version.json
	I0217 11:57:11.966182  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:57:11.968710  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.968814  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.969180  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.969211  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:11.969228  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.969243  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:11.969400  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.969505  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:57:11.969573  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.969654  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:57:11.969705  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.969780  100380 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:57:11.969855  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:11.969896  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:57:12.070993  100380 ssh_runner.go:195] Run: systemctl --version
	I0217 11:57:12.076962  100380 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0217 11:57:12.082069  100380 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0217 11:57:12.082164  100380 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0217 11:57:12.097308  100380 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0217 11:57:12.097353  100380 start.go:495] detecting cgroup driver to use...
	I0217 11:57:12.097502  100380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0217 11:57:12.116857  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0217 11:57:12.128177  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0217 11:57:12.139383  100380 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0217 11:57:12.139433  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0217 11:57:12.150535  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0217 11:57:12.161824  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0217 11:57:12.173075  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0217 11:57:12.184735  100380 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0217 11:57:12.196065  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0217 11:57:12.206061  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0217 11:57:12.215826  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0217 11:57:12.225719  100380 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0217 11:57:12.234589  100380 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0217 11:57:12.234644  100380 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0217 11:57:12.244581  100380 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
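
Editor's note: this probe-and-fallback sequence is deliberate. `sysctl net.bridge.bridge-nf-call-iptables` exits 255 while the br_netfilter module is unloaded, which is treated as the cue to `modprobe br_netfilter` and then enable IPv4 forwarding. A minimal sketch of the same probe via os/exec (needs root to actually succeed):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe: succeeds only when br_netfilter is already loaded.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("probe failed, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe failed (expected without root):", err)
		}
	}
	// Enable IPv4 forwarding, mirroring `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := exec.Command("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Println("could not enable ip_forward (expected without root):", err)
	}
}
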
	I0217 11:57:12.253602  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:12.359116  100380 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0217 11:57:12.382906  100380 start.go:495] detecting cgroup driver to use...
	I0217 11:57:12.383010  100380 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0217 11:57:12.408300  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0217 11:57:12.424027  100380 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0217 11:57:12.444833  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0217 11:57:12.457628  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0217 11:57:12.470140  100380 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0217 11:57:12.497764  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0217 11:57:12.511071  100380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0217 11:57:12.529141  100380 ssh_runner.go:195] Run: which cri-dockerd
	I0217 11:57:12.532846  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0217 11:57:12.541895  100380 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0217 11:57:12.557198  100380 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0217 11:57:12.670128  100380 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0217 11:57:12.796263  100380 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0217 11:57:12.796399  100380 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0217 11:57:12.812229  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:12.923350  100380 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0217 11:57:15.351609  100380 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.428206669s)
	I0217 11:57:15.351699  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0217 11:57:15.364852  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0217 11:57:15.377423  100380 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0217 11:57:15.493635  100380 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0217 11:57:15.621524  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:15.730858  100380 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0217 11:57:15.748138  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0217 11:57:15.761818  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:15.881775  100380 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0217 11:57:15.960772  100380 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0217 11:57:15.960858  100380 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0217 11:57:15.966411  100380 start.go:563] Will wait 60s for crictl version
	I0217 11:57:15.966517  100380 ssh_runner.go:195] Run: which crictl
	I0217 11:57:15.974036  100380 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0217 11:57:16.011837  100380 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0217 11:57:16.011912  100380 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0217 11:57:16.036945  100380 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0217 11:57:16.060974  100380 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0217 11:57:16.061031  100380 main.go:141] libmachine: (ha-783738) Calling .GetIP
	I0217 11:57:16.063810  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:16.064255  100380 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:57:01 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:57:16.064298  100380 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:57:16.064499  100380 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0217 11:57:16.068464  100380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
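
Editor's note: the one-liner above is an idempotent /etc/hosts update: strip any stale `host.minikube.internal` entry, append the current one, and install the result in a single `sudo cp` so the file is never left half-written. The same pattern sketched in Go against a scratch path (hypothetical file name; /etc/hosts itself needs root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any line ending in "\t<host>" and appends a fresh entry,
// writing via a temp file plus rename so readers never see a partial file.
func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // the log uses `sudo cp` since /etc/hosts is root-owned
}

func main() {
	if err := upsertHost("/tmp/hosts.demo", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
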
	I0217 11:57:16.080668  100380 kubeadm.go:883] updating cluster {Name:ha-783738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.168 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0217 11:57:16.080804  100380 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0217 11:57:16.080849  100380 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0217 11:57:16.098890  100380 docker.go:689] Got preloaded images: -- stdout --
	kindest/kindnetd:v20250214-acbabc1a
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	ghcr.io/kube-vip/kube-vip:v0.8.9
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0217 11:57:16.098911  100380 docker.go:619] Images already preloaded, skipping extraction
	I0217 11:57:16.098974  100380 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0217 11:57:16.116506  100380 docker.go:689] Got preloaded images: -- stdout --
	kindest/kindnetd:v20250214-acbabc1a
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	ghcr.io/kube-vip/kube-vip:v0.8.9
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0217 11:57:16.116540  100380 cache_images.go:84] Images are preloaded, skipping loading
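
Editor's note: "Images are preloaded, skipping loading" falls out of comparing the `docker images` listing above with the image set required for Kubernetes v1.32.1; when every required image is already present, the preload tarball is not extracted again. A simplified version of that subset check (the required list here is illustrative, not the exact list minikube uses):

package main

import "fmt"

// preloaded reports whether every required image appears in the docker listing.
func preloaded(listed, required []string) bool {
	have := make(map[string]bool, len(listed))
	for _, img := range listed {
		have[img] = true
	}
	for _, img := range required {
		if !have[img] {
			return false
		}
	}
	return true
}

func main() {
	listed := []string{
		"registry.k8s.io/kube-apiserver:v1.32.1",
		"registry.k8s.io/etcd:3.5.16-0",
		"registry.k8s.io/pause:3.10",
	}
	required := []string{"registry.k8s.io/kube-apiserver:v1.32.1", "registry.k8s.io/pause:3.10"}
	fmt.Println("skip extraction:", preloaded(listed, required))
}
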
	I0217 11:57:16.116556  100380 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.32.1 docker true true} ...
	I0217 11:57:16.116703  100380 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-783738 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0217 11:57:16.116764  100380 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0217 11:57:16.164431  100380 cni.go:84] Creating CNI manager for ""
	I0217 11:57:16.164455  100380 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0217 11:57:16.164469  100380 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0217 11:57:16.164499  100380 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-783738 NodeName:ha-783738 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0217 11:57:16.164682  100380 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-783738"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.249"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
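
Editor's note: the kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new and later compared against the live copy to decide whether the control plane must be reconfigured (see the `sudo diff -u /var/tmp/minikube/kubeadm.yaml ...` run further down). As a hedged illustration of consuming the KubeletConfiguration document in that file, a minimal Go sketch that unmarshals just the runtime-wiring fields (uses gopkg.in/yaml.v3; the real type lives in k8s.io/kubelet's config API):

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kubeletConfig models only the fields of interest here; it is an
// illustrative subset, not the full KubeletConfiguration type.
type kubeletConfig struct {
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	StaticPodPath            string `yaml:"staticPodPath"`
}

const doc = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
staticPodPath: /etc/kubernetes/manifests
`

func main() {
	var cfg kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg)
}
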
	I0217 11:57:16.164704  100380 kube-vip.go:115] generating kube-vip config ...
	I0217 11:57:16.164766  100380 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0217 11:57:16.178981  100380 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0217 11:57:16.179102  100380 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
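
Editor's note: the vip_leaseduration, vip_renewdeadline, and vip_retryperiod values in the manifest above (5/3/1 seconds on the plndr-cp-lock Lease in kube-system) are standard Kubernetes leader-election timings: whichever kube-vip pod holds the lease answers for the 192.168.39.254 VIP. A rough sketch of how those three numbers map onto client-go leader election; this is illustrative, not kube-vip's actual implementation, and assumes KUBECONFIG points at a reachable cluster:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	host, _ := os.Hostname()

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "plndr-cp-lock", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: host},
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 5 * time.Second, // vip_leaseduration
		RenewDeadline: 3 * time.Second, // vip_renewdeadline
		RetryPeriod:   1 * time.Second, // vip_retryperiod
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { fmt.Println("leader: would claim the VIP") },
			OnStoppedLeading: func() { fmt.Println("lost lease: would release the VIP") },
		},
	})
}

The short renew deadline is what lets the VIP fail over within a few seconds when the leading control-plane node goes away.
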
	I0217 11:57:16.179161  100380 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0217 11:57:16.189237  100380 binaries.go:44] Found k8s binaries, skipping transfer
	I0217 11:57:16.189321  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0217 11:57:16.198727  100380 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0217 11:57:16.214787  100380 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0217 11:57:16.231014  100380 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0217 11:57:16.246729  100380 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0217 11:57:16.261779  100380 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0217 11:57:16.265453  100380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0217 11:57:16.276521  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:16.384249  100380 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0217 11:57:16.401291  100380 certs.go:68] Setting up /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738 for IP: 192.168.39.249
	I0217 11:57:16.401328  100380 certs.go:194] generating shared ca certs ...
	I0217 11:57:16.401350  100380 certs.go:226] acquiring lock for ca certs: {Name:mk7093571229e43ae88bf2507ccc9fd2cd05388e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:16.401508  100380 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.key
	I0217 11:57:16.401544  100380 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.key
	I0217 11:57:16.401555  100380 certs.go:256] generating profile certs ...
	I0217 11:57:16.401635  100380 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/client.key
	I0217 11:57:16.401660  100380 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key.1b1cbf3b
	I0217 11:57:16.401671  100380 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt.1b1cbf3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.31 192.168.39.254]
	I0217 11:57:16.475033  100380 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt.1b1cbf3b ...
	I0217 11:57:16.475062  100380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt.1b1cbf3b: {Name:mkcae1f9f128e66451afcd5b133e6826e9862cbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:16.475228  100380 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key.1b1cbf3b ...
	I0217 11:57:16.475243  100380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key.1b1cbf3b: {Name:mk484c481609a3c2ed473dfecb8f5468118b1367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:16.475330  100380 certs.go:381] copying /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt.1b1cbf3b -> /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt
	I0217 11:57:16.475492  100380 certs.go:385] copying /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key.1b1cbf3b -> /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key
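
Editor's note: the regenerated apiserver certificate is issued for every address a client might use (the in-cluster service IP 10.96.0.1, loopback, both control-plane node IPs, and the HA VIP 192.168.39.254) so TLS verification succeeds regardless of which endpoint a request lands on. A self-signed sketch of issuing a certificate with IP SANs via crypto/x509 (minikube signs with its cluster CA rather than self-signing):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs mirroring the list in the log above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.249"), net.ParseIP("192.168.39.31"), net.ParseIP("192.168.39.254"),
		},
	}
	// Self-signed for brevity; a real apiserver cert uses the CA as parent/signer.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
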
	I0217 11:57:16.475629  100380 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.key
	I0217 11:57:16.475644  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0217 11:57:16.475656  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0217 11:57:16.475671  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0217 11:57:16.475699  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0217 11:57:16.475714  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0217 11:57:16.475726  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0217 11:57:16.475737  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0217 11:57:16.475748  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0217 11:57:16.475800  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/84502.pem (1338 bytes)
	W0217 11:57:16.475831  100380 certs.go:480] ignoring /home/jenkins/minikube-integration/20427-77349/.minikube/certs/84502_empty.pem, impossibly tiny 0 bytes
	I0217 11:57:16.475839  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem (1679 bytes)
	I0217 11:57:16.475861  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem (1082 bytes)
	I0217 11:57:16.475900  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem (1123 bytes)
	I0217 11:57:16.475927  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem (1675 bytes)
	I0217 11:57:16.476002  100380 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem (1708 bytes)
	I0217 11:57:16.476031  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/84502.pem -> /usr/share/ca-certificates/84502.pem
	I0217 11:57:16.476046  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> /usr/share/ca-certificates/845022.pem
	I0217 11:57:16.476058  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0217 11:57:16.476652  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0217 11:57:16.507138  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0217 11:57:16.534527  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0217 11:57:16.562922  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0217 11:57:16.587311  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0217 11:57:16.624087  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0217 11:57:16.662037  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0217 11:57:16.713619  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0217 11:57:16.756345  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/certs/84502.pem --> /usr/share/ca-certificates/84502.pem (1338 bytes)
	I0217 11:57:16.803520  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem --> /usr/share/ca-certificates/845022.pem (1708 bytes)
	I0217 11:57:16.846879  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0217 11:57:16.920267  100380 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0217 11:57:16.950648  100380 ssh_runner.go:195] Run: openssl version
	I0217 11:57:16.958784  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84502.pem && ln -fs /usr/share/ca-certificates/84502.pem /etc/ssl/certs/84502.pem"
	I0217 11:57:16.987238  100380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84502.pem
	I0217 11:57:16.994220  100380 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 17 11:42 /usr/share/ca-certificates/84502.pem
	I0217 11:57:16.994283  100380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84502.pem
	I0217 11:57:17.016466  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84502.pem /etc/ssl/certs/51391683.0"
	I0217 11:57:17.039972  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845022.pem && ln -fs /usr/share/ca-certificates/845022.pem /etc/ssl/certs/845022.pem"
	I0217 11:57:17.061818  100380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845022.pem
	I0217 11:57:17.068988  100380 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 17 11:42 /usr/share/ca-certificates/845022.pem
	I0217 11:57:17.069057  100380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845022.pem
	I0217 11:57:17.075953  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/845022.pem /etc/ssl/certs/3ec20f2e.0"
	I0217 11:57:17.094161  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0217 11:57:17.111313  100380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0217 11:57:17.116268  100380 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 17 11:35 /usr/share/ca-certificates/minikubeCA.pem
	I0217 11:57:17.116335  100380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0217 11:57:17.122743  100380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
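
Editor's note: the `ln -fs ... /etc/ssl/certs/<hash>.0` runs above populate OpenSSL's hashed certificate directory: `openssl x509 -hash -noout` prints the subject-name hash (b5213941 for minikubeCA.pem), and OpenSSL resolves trust anchors through `<hash>.<n>` symlinks at verification time. A small sketch producing one such link (shells out to openssl; the target directory is hypothetical):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash creates the <subject-hash>.0 symlink OpenSSL expects in a CA dir.
func linkByHash(certPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // mirror `ln -fs` by replacing any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
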
	I0217 11:57:17.141827  100380 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0217 11:57:17.146771  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0217 11:57:17.158301  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0217 11:57:17.170200  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0217 11:57:17.177413  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0217 11:57:17.186556  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0217 11:57:17.193933  100380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
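
Editor's note: each `openssl x509 -checkend 86400` invocation asks whether the certificate will still be valid 24 hours from now; a nonzero exit here would trigger regeneration before the control plane is restarted. The equivalent check in Go, reading the PEM directly:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside d,
// matching `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("needs regeneration:", soon)
}
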
	I0217 11:57:17.203839  100380 kubeadm.go:392] StartCluster: {Name:ha-783738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-783738 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.168 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 11:57:17.204089  100380 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0217 11:57:17.225257  100380 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0217 11:57:17.236858  100380 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0217 11:57:17.236876  100380 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0217 11:57:17.236920  100380 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0217 11:57:17.246285  100380 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0217 11:57:17.246828  100380 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-783738" does not appear in /home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:57:17.246986  100380 kubeconfig.go:62] /home/jenkins/minikube-integration/20427-77349/kubeconfig needs updating (will repair): [kubeconfig missing "ha-783738" cluster setting kubeconfig missing "ha-783738" context setting]
	I0217 11:57:17.247367  100380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-77349/kubeconfig: {Name:mka23a5c17f10bb58374e83755a2ac6a44464e11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
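
Editor's note: the repair step above re-adds the missing "ha-783738" cluster and context entries to the shared kubeconfig before any API client is constructed. A sketch of that fix-up using client-go's clientcmd package (names and paths taken from the log; user/credential wiring trimmed for brevity):

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/20427-77349/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		cfg = clientcmdapi.NewConfig()
	}

	// Re-add the cluster entry the verify step found missing.
	cluster := clientcmdapi.NewCluster()
	cluster.Server = "https://192.168.39.249:8443"
	cluster.CertificateAuthority = "/home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt"
	cfg.Clusters["ha-783738"] = cluster

	// ...and the matching context.
	ctx := clientcmdapi.NewContext()
	ctx.Cluster = "ha-783738"
	ctx.AuthInfo = "ha-783738"
	cfg.Contexts["ha-783738"] = ctx

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}
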
	I0217 11:57:17.247895  100380 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:57:17.248117  100380 kapi.go:59] client config for ha-783738: &rest.Config{Host:"https://192.168.39.249:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/client.crt", KeyFile:"/home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/client.key", CAFile:"/home/jenkins/minikube-integration/20427-77349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24df700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0217 11:57:17.248591  100380 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0217 11:57:17.248610  100380 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0217 11:57:17.248615  100380 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0217 11:57:17.248619  100380 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0217 11:57:17.248634  100380 cert_rotation.go:140] Starting client certificate rotation controller
	I0217 11:57:17.249054  100380 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0217 11:57:17.258029  100380 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.249
	I0217 11:57:17.258053  100380 kubeadm.go:597] duration metric: took 21.170416ms to restartPrimaryControlPlane
	I0217 11:57:17.258062  100380 kubeadm.go:394] duration metric: took 54.240079ms to StartCluster
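The reconfiguration decision above reduces to a write-then-diff check: a fresh kubeadm.yaml.new is rendered and compared against the live /var/tmp/minikube/kubeadm.yaml, and the control plane is only reconfigured when they differ. A minimal Go sketch of that idempotent-update pattern, assuming the paths from the log; the helper is illustrative, not minikube's actual code:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// needsReconfigure reports whether the freshly rendered config differs from
	// the config already on disk, mirroring the diff step logged above.
	func needsReconfigure(livePath, freshPath string) (bool, error) {
		live, err := os.ReadFile(livePath)
		if err != nil {
			return true, nil // no live config yet: (re)configure
		}
		fresh, err := os.ReadFile(freshPath)
		if err != nil {
			return false, err
		}
		return !bytes.Equal(live, fresh), nil
	}

	func main() {
		changed, err := needsReconfigure(
			"/var/tmp/minikube/kubeadm.yaml",
			"/var/tmp/minikube/kubeadm.yaml.new",
		)
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("reconfigure needed:", changed)
	}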
	I0217 11:57:17.258077  100380 settings.go:142] acquiring lock: {Name:mkf730c657b1c2d5a481dbeb02dabe7dfa17f2d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:17.258150  100380 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:57:17.258639  100380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-77349/kubeconfig: {Name:mka23a5c17f10bb58374e83755a2ac6a44464e11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 11:57:17.258848  100380 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0217 11:57:17.258870  100380 start.go:241] waiting for startup goroutines ...
	I0217 11:57:17.258884  100380 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0217 11:57:17.259112  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:57:17.261397  100380 out.go:177] * Enabled addons: 
	I0217 11:57:17.262668  100380 addons.go:514] duration metric: took 3.785415ms for enable addons: enabled=[]
	I0217 11:57:17.262703  100380 start.go:246] waiting for cluster config update ...
	I0217 11:57:17.262713  100380 start.go:255] writing updated cluster config ...
	I0217 11:57:17.264127  100380 out.go:201] 
	I0217 11:57:17.265577  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:57:17.265703  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:57:17.267570  100380 out.go:177] * Starting "ha-783738-m02" control-plane node in "ha-783738" cluster
	I0217 11:57:17.268921  100380 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0217 11:57:17.268950  100380 cache.go:56] Caching tarball of preloaded images
	I0217 11:57:17.269061  100380 preload.go:172] Found /home/jenkins/minikube-integration/20427-77349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0217 11:57:17.269074  100380 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0217 11:57:17.269250  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:57:17.269484  100380 start.go:360] acquireMachinesLock for ha-783738-m02: {Name:mk05ba8323ae77ab7dcc14c378d65810d956fdc0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0217 11:57:17.269554  100380 start.go:364] duration metric: took 46.103µs to acquireMachinesLock for "ha-783738-m02"
	I0217 11:57:17.269576  100380 start.go:96] Skipping create...Using existing machine configuration
	I0217 11:57:17.269584  100380 fix.go:54] fixHost starting: m02
	I0217 11:57:17.269846  100380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:57:17.269891  100380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:57:17.284961  100380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45093
	I0217 11:57:17.285438  100380 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:57:17.285964  100380 main.go:141] libmachine: Using API Version  1
	I0217 11:57:17.285991  100380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:57:17.286358  100380 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:57:17.286562  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:17.286744  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetState
	I0217 11:57:17.288288  100380 fix.go:112] recreateIfNeeded on ha-783738-m02: state=Stopped err=<nil>
	I0217 11:57:17.288317  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	W0217 11:57:17.288473  100380 fix.go:138] unexpected machine state, will restart: <nil>
	I0217 11:57:17.290496  100380 out.go:177] * Restarting existing kvm2 VM for "ha-783738-m02" ...
	I0217 11:57:17.291737  100380 main.go:141] libmachine: (ha-783738-m02) Calling .Start
	I0217 11:57:17.291936  100380 main.go:141] libmachine: (ha-783738-m02) starting domain...
	I0217 11:57:17.291957  100380 main.go:141] libmachine: (ha-783738-m02) ensuring networks are active...
	I0217 11:57:17.292625  100380 main.go:141] libmachine: (ha-783738-m02) Ensuring network default is active
	I0217 11:57:17.292935  100380 main.go:141] libmachine: (ha-783738-m02) Ensuring network mk-ha-783738 is active
	I0217 11:57:17.293260  100380 main.go:141] libmachine: (ha-783738-m02) getting domain XML...
	I0217 11:57:17.293893  100380 main.go:141] libmachine: (ha-783738-m02) creating domain...
	I0217 11:57:18.506378  100380 main.go:141] libmachine: (ha-783738-m02) waiting for IP...
	I0217 11:57:18.507364  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:18.507881  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:18.507974  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:18.507878  100573 retry.go:31] will retry after 190.071186ms: waiting for domain to come up
	I0217 11:57:18.699203  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:18.699617  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:18.699682  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:18.699590  100573 retry.go:31] will retry after 254.022024ms: waiting for domain to come up
	I0217 11:57:18.955132  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:18.955578  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:18.955602  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:18.955533  100573 retry.go:31] will retry after 332.594264ms: waiting for domain to come up
	I0217 11:57:19.290041  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:19.290494  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:19.290519  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:19.290472  100573 retry.go:31] will retry after 550.484931ms: waiting for domain to come up
	I0217 11:57:19.842363  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:19.842844  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:19.842873  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:19.842822  100573 retry.go:31] will retry after 743.60757ms: waiting for domain to come up
	I0217 11:57:20.587667  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:20.588025  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:20.588058  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:20.587981  100573 retry.go:31] will retry after 701.750144ms: waiting for domain to come up
	I0217 11:57:21.290980  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:21.291500  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:21.291530  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:21.291445  100573 retry.go:31] will retry after 755.313925ms: waiting for domain to come up
	I0217 11:57:22.047876  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:22.048286  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:22.048318  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:22.048246  100573 retry.go:31] will retry after 1.338224716s: waiting for domain to come up
	I0217 11:57:23.388238  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:23.388759  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:23.388796  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:23.388727  100573 retry.go:31] will retry after 1.367661407s: waiting for domain to come up
	I0217 11:57:24.758376  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:24.758722  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:24.758764  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:24.758718  100573 retry.go:31] will retry after 2.08548116s: waiting for domain to come up
	I0217 11:57:26.846621  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:26.847150  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:26.847253  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:26.847166  100573 retry.go:31] will retry after 1.933968455s: waiting for domain to come up
	I0217 11:57:28.782369  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:28.782785  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:28.782815  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:28.782752  100573 retry.go:31] will retry after 3.162167749s: waiting for domain to come up
	I0217 11:57:31.947188  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:31.947578  100380 main.go:141] libmachine: (ha-783738-m02) DBG | unable to find current IP address of domain ha-783738-m02 in network mk-ha-783738
	I0217 11:57:31.947603  100380 main.go:141] libmachine: (ha-783738-m02) DBG | I0217 11:57:31.947545  100573 retry.go:31] will retry after 3.924986004s: waiting for domain to come up
	I0217 11:57:35.877102  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:35.877437  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has current primary IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:35.877460  100380 main.go:141] libmachine: (ha-783738-m02) found domain IP: 192.168.39.31
	I0217 11:57:35.877473  100380 main.go:141] libmachine: (ha-783738-m02) reserving static IP address...
	I0217 11:57:35.877915  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "ha-783738-m02", mac: "52:54:00:06:81:a2", ip: "192.168.39.31"} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:35.877942  100380 main.go:141] libmachine: (ha-783738-m02) DBG | skip adding static IP to network mk-ha-783738 - found existing host DHCP lease matching {name: "ha-783738-m02", mac: "52:54:00:06:81:a2", ip: "192.168.39.31"}
	I0217 11:57:35.877960  100380 main.go:141] libmachine: (ha-783738-m02) reserved static IP address 192.168.39.31 for domain ha-783738-m02
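The "waiting for IP" loop above sleeps for roughly doubling, jittered intervals (190ms, 254ms, 332ms, up to 3.9s) between probes of the DHCP leases. A minimal sketch of that retry-until-ready pattern with capped, jittered backoff; the cap and jitter factor are assumptions, not minikube's actual retry.go values:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor polls check until it succeeds or the deadline passes, sleeping an
	// exponentially growing, jittered interval between attempts.
	func waitFor(check func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if err := check(); err == nil {
				return nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v\n", jittered)
			time.Sleep(jittered)
			if delay *= 2; delay > 4*time.Second {
				delay = 4 * time.Second // cap the backoff
			}
		}
		return errors.New("timed out waiting for condition")
	}

	func main() {
		n := 0
		_ = waitFor(func() error {
			if n++; n < 4 {
				return errors.New("not ready") // e.g. no DHCP lease yet
			}
			return nil
		}, 30*time.Second)
	}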
	I0217 11:57:35.877972  100380 main.go:141] libmachine: (ha-783738-m02) waiting for SSH...
	I0217 11:57:35.877983  100380 main.go:141] libmachine: (ha-783738-m02) DBG | Getting to WaitForSSH function...
	I0217 11:57:35.880382  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:35.880801  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:35.880830  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:35.880903  100380 main.go:141] libmachine: (ha-783738-m02) DBG | Using SSH client type: external
	I0217 11:57:35.880925  100380 main.go:141] libmachine: (ha-783738-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa (-rw-------)
	I0217 11:57:35.880955  100380 main.go:141] libmachine: (ha-783738-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0217 11:57:35.880970  100380 main.go:141] libmachine: (ha-783738-m02) DBG | About to run SSH command:
	I0217 11:57:35.880982  100380 main.go:141] libmachine: (ha-783738-m02) DBG | exit 0
	I0217 11:57:36.005182  100380 main.go:141] libmachine: (ha-783738-m02) DBG | SSH cmd err, output: <nil>: 
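The WaitForSSH step above shells out to the system ssh binary and treats a clean "exit 0" as readiness. A hedged Go sketch of the same probe, reusing a subset of the non-interactive options recorded in the log; the wrapper itself is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// sshReady returns nil once "ssh ... exit 0" succeeds against the guest,
	// using non-interactive options taken from the log above.
	func sshReady(user, ip, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			fmt.Sprintf("%s@%s", user, ip),
			"exit 0",
		}
		return exec.Command("ssh", args...).Run()
	}

	func main() {
		err := sshReady("docker", "192.168.39.31",
			"/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa")
		fmt.Println("ssh ready:", err == nil)
	}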
	I0217 11:57:36.005527  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetConfigRaw
	I0217 11:57:36.006216  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetIP
	I0217 11:57:36.008704  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.009084  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.009118  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.009443  100380 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/ha-783738/config.json ...
	I0217 11:57:36.009639  100380 machine.go:93] provisionDockerMachine start ...
	I0217 11:57:36.009657  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:36.009816  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.011849  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.012187  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.012218  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.012360  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.012557  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.012710  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.012836  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.012947  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.013115  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.013130  100380 main.go:141] libmachine: About to run SSH command:
	hostname
	I0217 11:57:36.113056  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0217 11:57:36.113093  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetMachineName
	I0217 11:57:36.113376  100380 buildroot.go:166] provisioning hostname "ha-783738-m02"
	I0217 11:57:36.113403  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetMachineName
	I0217 11:57:36.113566  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.116233  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.116606  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.116634  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.116762  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.116907  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.117025  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.117242  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.117464  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.117681  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.117699  100380 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-783738-m02 && echo "ha-783738-m02" | sudo tee /etc/hostname
	I0217 11:57:36.230628  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-783738-m02
	
	I0217 11:57:36.230670  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.233644  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.233991  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.234015  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.234196  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.234491  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.234686  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.234856  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.235006  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.235194  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.235211  100380 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-783738-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-783738-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-783738-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0217 11:57:36.341290  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0217 11:57:36.341332  100380 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20427-77349/.minikube CaCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20427-77349/.minikube}
	I0217 11:57:36.341348  100380 buildroot.go:174] setting up certificates
	I0217 11:57:36.341360  100380 provision.go:84] configureAuth start
	I0217 11:57:36.341373  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetMachineName
	I0217 11:57:36.341646  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetIP
	I0217 11:57:36.344453  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.344944  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.344981  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.345158  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.347416  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.347719  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.347744  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.347910  100380 provision.go:143] copyHostCerts
	I0217 11:57:36.347943  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem
	I0217 11:57:36.347989  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem, removing ...
	I0217 11:57:36.347999  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem
	I0217 11:57:36.348065  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/ca.pem (1082 bytes)
	I0217 11:57:36.348156  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem
	I0217 11:57:36.348190  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem, removing ...
	I0217 11:57:36.348200  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem
	I0217 11:57:36.348229  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/cert.pem (1123 bytes)
	I0217 11:57:36.348286  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem
	I0217 11:57:36.348310  100380 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem, removing ...
	I0217 11:57:36.348320  100380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem
	I0217 11:57:36.348347  100380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20427-77349/.minikube/key.pem (1675 bytes)
	I0217 11:57:36.348413  100380 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca-key.pem org=jenkins.ha-783738-m02 san=[127.0.0.1 192.168.39.31 ha-783738-m02 localhost minikube]
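The server certificate above is issued with SANs covering the loopback address, the node IP, the hostname, localhost, and minikube. For reference, a self-contained Go sketch that issues a certificate with the same SAN set using only the standard library; it self-signs for brevity, whereas minikube signs with the CA key named above:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// In the real flow the signer would be the CA key; self-sign here.
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-783738-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SAN list matching the san=[...] entry in the log:
			DNSNames:    []string{"ha-783738-m02", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.31")},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}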
	I0217 11:57:36.476199  100380 provision.go:177] copyRemoteCerts
	I0217 11:57:36.476256  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0217 11:57:36.476280  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.479126  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.479497  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.479529  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.479677  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.479868  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.480073  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.480258  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	I0217 11:57:36.558954  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0217 11:57:36.559023  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0217 11:57:36.581755  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0217 11:57:36.581816  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0217 11:57:36.604328  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0217 11:57:36.604411  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0217 11:57:36.626183  100380 provision.go:87] duration metric: took 284.807453ms to configureAuth
	I0217 11:57:36.626219  100380 buildroot.go:189] setting minikube options for container-runtime
	I0217 11:57:36.626492  100380 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:57:36.626522  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:36.626768  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.629194  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.629569  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.629594  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.629740  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.629904  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.630077  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.630201  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.630389  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.630601  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.630614  100380 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0217 11:57:36.730964  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0217 11:57:36.730995  100380 buildroot.go:70] root file system type: tmpfs
	I0217 11:57:36.731148  100380 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0217 11:57:36.731184  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.733718  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.734119  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.734150  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.734340  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.734539  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.734714  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.734847  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.734986  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.735198  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.735304  100380 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.249"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0217 11:57:36.846599  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.249
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
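The empty ExecStart= directive in the unit above is the standard systemd reset idiom: the first, empty assignment clears any ExecStart inherited from the base unit, and the second supplies the replacement:

	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 ...

Without the reset, systemd would see two ExecStart commands for a Type=notify service and refuse to start it, exactly as the in-file comment warns.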
	
	I0217 11:57:36.846633  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:36.849370  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.849714  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:36.849733  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:36.849923  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:36.850116  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.850290  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:36.850443  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:36.850608  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:36.850788  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:36.850805  100380 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0217 11:57:38.700010  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
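The diff-or-install idiom in the command above ("diff -u old new || { mv ...; restart; }") makes the unit update idempotent: docker is only reinstalled and restarted when the rendered unit actually changed. Here diff failed because /lib/systemd/system/docker.service did not yet exist on the freshly restarted VM (the root filesystem is tmpfs, per the earlier df check), so the new unit was moved into place and the service enabled, as the "Created symlink" output confirms.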
	
	I0217 11:57:38.700036  100380 machine.go:96] duration metric: took 2.690384734s to provisionDockerMachine
	I0217 11:57:38.700051  100380 start.go:293] postStartSetup for "ha-783738-m02" (driver="kvm2")
	I0217 11:57:38.700060  100380 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0217 11:57:38.700075  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:38.700389  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0217 11:57:38.700425  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:38.703068  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.703435  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:38.703465  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.703605  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:38.703807  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:38.703952  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:38.704102  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	I0217 11:57:38.783381  100380 ssh_runner.go:195] Run: cat /etc/os-release
	I0217 11:57:38.787188  100380 info.go:137] Remote host: Buildroot 2023.02.9
	I0217 11:57:38.787215  100380 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-77349/.minikube/addons for local assets ...
	I0217 11:57:38.787270  100380 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-77349/.minikube/files for local assets ...
	I0217 11:57:38.787341  100380 filesync.go:149] local asset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> 845022.pem in /etc/ssl/certs
	I0217 11:57:38.787352  100380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem -> /etc/ssl/certs/845022.pem
	I0217 11:57:38.787430  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0217 11:57:38.796091  100380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/ssl/certs/845022.pem --> /etc/ssl/certs/845022.pem (1708 bytes)
	I0217 11:57:38.817716  100380 start.go:296] duration metric: took 117.649565ms for postStartSetup
	I0217 11:57:38.817759  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:38.818052  100380 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0217 11:57:38.818087  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:38.820354  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.820669  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:38.820694  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.820809  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:38.820978  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:38.821138  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:38.821273  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	I0217 11:57:38.900214  100380 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0217 11:57:38.900294  100380 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0217 11:57:38.959273  100380 fix.go:56] duration metric: took 21.689681729s for fixHost
	I0217 11:57:38.959327  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:38.961853  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.962326  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:38.962364  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:38.962591  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:38.962788  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:38.962952  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:38.963062  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:38.963238  100380 main.go:141] libmachine: Using SSH client type: native
	I0217 11:57:38.963408  100380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I0217 11:57:38.963419  100380 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0217 11:57:39.071315  100380 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739793459.049434891
	
	I0217 11:57:39.071339  100380 fix.go:216] guest clock: 1739793459.049434891
	I0217 11:57:39.071349  100380 fix.go:229] Guest: 2025-02-17 11:57:39.049434891 +0000 UTC Remote: 2025-02-17 11:57:38.959302801 +0000 UTC m=+48.782039917 (delta=90.13209ms)
	I0217 11:57:39.071366  100380 fix.go:200] guest clock delta is within tolerance: 90.13209ms
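The delta is plain subtraction of the two readings: guest 11:57:39.049434891 minus remote 11:57:38.959302801 = 0.090132090 s, i.e. the 90.13209ms reported, well inside the skew tolerance, so no guest clock adjustment was needed.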
	I0217 11:57:39.071371  100380 start.go:83] releasing machines lock for "ha-783738-m02", held for 21.801804436s
	I0217 11:57:39.071393  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:39.071600  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetIP
	I0217 11:57:39.074321  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.074707  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:39.074736  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.076949  100380 out.go:177] * Found network options:
	I0217 11:57:39.078428  100380 out.go:177]   - NO_PROXY=192.168.39.249
	W0217 11:57:39.079686  100380 proxy.go:119] fail to check proxy env: Error ip not in block
	I0217 11:57:39.079714  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:39.080218  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:39.080403  100380 main.go:141] libmachine: (ha-783738-m02) Calling .DriverName
	I0217 11:57:39.080510  100380 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0217 11:57:39.080551  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	W0217 11:57:39.080631  100380 proxy.go:119] fail to check proxy env: Error ip not in block
	I0217 11:57:39.080722  100380 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0217 11:57:39.080748  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHHostname
	I0217 11:57:39.083432  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.083453  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.083887  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:39.083914  100380 main.go:141] libmachine: (ha-783738-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:81:a2", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:53:18 +0000 UTC Type:0 Mac:52:54:00:06:81:a2 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-783738-m02 Clientid:01:52:54:00:06:81:a2}
	I0217 11:57:39.083933  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.083949  100380 main.go:141] libmachine: (ha-783738-m02) DBG | domain ha-783738-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:06:81:a2 in network mk-ha-783738
	I0217 11:57:39.084264  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:39.084411  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:39.084597  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:39.084609  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHPort
	I0217 11:57:39.084763  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHKeyPath
	I0217 11:57:39.084784  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	I0217 11:57:39.084915  100380 main.go:141] libmachine: (ha-783738-m02) Calling .GetSSHUsername
	I0217 11:57:39.085034  100380 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m02/id_rsa Username:docker}
	W0217 11:57:39.178061  100380 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0217 11:57:39.178137  100380 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0217 11:57:39.195964  100380 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0217 11:57:39.196001  100380 start.go:495] detecting cgroup driver to use...
	I0217 11:57:39.196148  100380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0217 11:57:39.216666  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0217 11:57:39.226815  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0217 11:57:39.236611  100380 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0217 11:57:39.236669  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0217 11:57:39.246500  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0217 11:57:39.256691  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0217 11:57:39.266509  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0217 11:57:39.276231  100380 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0217 11:57:39.286298  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0217 11:57:39.296149  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0217 11:57:39.305984  100380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0217 11:57:39.315650  100380 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0217 11:57:39.324721  100380 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0217 11:57:39.324777  100380 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0217 11:57:39.334429  100380 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
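The two commands above show the netfilter fallback: when the bridge-nf-call-iptables sysctl path is missing, the br_netfilter module is loaded, and IP forwarding is enabled by writing to procfs. A minimal, root-requiring Go sketch of that sequence; error handling is trimmed and the flow is illustrative:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			// sysctl path absent: the kernel module is not loaded yet.
			_ = exec.Command("modprobe", "br_netfilter").Run()
		}
		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		_ = os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
	}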
	I0217 11:57:39.343052  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:39.458041  100380 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0217 11:57:39.483361  100380 start.go:495] detecting cgroup driver to use...
	I0217 11:57:39.483453  100380 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0217 11:57:39.501404  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0217 11:57:39.522545  100380 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0217 11:57:39.545214  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0217 11:57:39.557462  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0217 11:57:39.569445  100380 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0217 11:57:39.593668  100380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0217 11:57:39.606767  100380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0217 11:57:39.623713  100380 ssh_runner.go:195] Run: which cri-dockerd
	I0217 11:57:39.627306  100380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0217 11:57:39.635920  100380 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0217 11:57:39.651184  100380 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0217 11:57:39.767938  100380 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0217 11:57:39.884761  100380 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0217 11:57:39.884806  100380 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
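The 130-byte /etc/docker/daemon.json written above pins docker's cgroup driver to cgroupfs. Its exact contents are not echoed in the log; assuming dockerd's documented exec-opts key, the relevant fragment would look like:

	{
		"exec-opts": ["native.cgroupdriver=cgroupfs"]
	}

The rest of the file, if any, is not shown in the log.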
	I0217 11:57:39.900934  100380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 11:57:40.013206  100380 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0217 11:58:41.088581  100380 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.075335279s)
	I0217 11:58:41.088680  100380 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0217 11:58:41.109373  100380 out.go:201] 
	W0217 11:58:41.110918  100380 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Feb 17 11:57:37 ha-783738-m02 systemd[1]: Starting Docker Application Container Engine...
	Feb 17 11:57:37 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:37.207555071Z" level=info msg="Starting up"
	Feb 17 11:57:37 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:37.208523706Z" level=info msg="containerd not running, starting managed containerd"
	Feb 17 11:57:37 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:37.209284365Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=499
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.234357473Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.253922324Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254071326Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254155313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254195097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254502645Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254572700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254826671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254880442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254926515Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.254965881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.255209553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.255502921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257578132Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257723954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257912930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.257960933Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.258214223Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.258292090Z" level=info msg="metadata content store policy set" policy=shared
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262281766Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262389757Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262437193Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262478052Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262523730Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262614966Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.262915194Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263049035Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263094390Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263137669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263176270Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263217488Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263254710Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263292496Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263339613Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263377065Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263418085Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263453223Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263511094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263549833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263589341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263631649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263726157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263766086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263809930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263847665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263885358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263932212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.263972615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264020660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264063975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264103157Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264158305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264194401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264230305Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264327104Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264417123Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264457690Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264499822Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264534568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264575047Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264616722Z" level=info msg="NRI interface is disabled by configuration."
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.264938960Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.265032087Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.265091203Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 17 11:57:37 ha-783738-m02 dockerd[499]: time="2025-02-17T11:57:37.265132167Z" level=info msg="containerd successfully booted in 0.032037s"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.237803305Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.295143778Z" level=info msg="Loading containers: start."
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.484051173Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.565431513Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.632528889Z" level=info msg="Loading containers: done."
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.653906274Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.653941707Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.653962858Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.654196375Z" level=info msg="Daemon has completed initialization"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.676178691Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 17 11:57:38 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:38.676315120Z" level=info msg="API listen on [::]:2376"
	Feb 17 11:57:38 ha-783738-m02 systemd[1]: Started Docker Application Container Engine.
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.005718953Z" level=info msg="Processing signal 'terminated'"
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.007186879Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.007378782Z" level=info msg="Daemon shutdown complete"
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.007446197Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 17 11:57:40 ha-783738-m02 systemd[1]: Stopping Docker Application Container Engine...
	Feb 17 11:57:40 ha-783738-m02 dockerd[493]: time="2025-02-17T11:57:40.008214930Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 17 11:57:41 ha-783738-m02 systemd[1]: docker.service: Deactivated successfully.
	Feb 17 11:57:41 ha-783738-m02 systemd[1]: Stopped Docker Application Container Engine.
	Feb 17 11:57:41 ha-783738-m02 systemd[1]: Starting Docker Application Container Engine...
	Feb 17 11:57:41 ha-783738-m02 dockerd[1120]: time="2025-02-17T11:57:41.051838490Z" level=info msg="Starting up"
	Feb 17 11:58:41 ha-783738-m02 dockerd[1120]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 17 11:58:41 ha-783738-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 17 11:58:41 ha-783738-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 17 11:58:41 ha-783738-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
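
Note: the journal above pinpoints the failure on ha-783738-m02. The first dockerd (pid 493) spawned its own managed containerd on /var/run/docker/containerd/containerd.sock and came up cleanly at 11:57:38, but the restarted dockerd (pid 1120) instead dialed the system socket /run/containerd/containerd.sock, which minikube had stopped at 11:57:39, and gave up after exactly 60s (11:57:41 to 11:58:41). One plausible reading is that the restart raced the runtime reconfiguration. A hedged triage sequence on the node (standard systemd/coreutils commands, not from the log):

    systemctl status docker containerd --no-pager
    ls -l /run/containerd/containerd.sock     # the dial target that hit the deadline
    sudo journalctl -u containerd --no-pager | tail -n 30
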
	W0217 11:58:41.110964  100380 out.go:270] * 
	W0217 11:58:41.111815  100380 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0217 11:58:41.113412  100380 out.go:201] 
	
	
	==> Docker <==
	Feb 17 11:57:23 ha-783738 dockerd[1134]: time="2025-02-17T11:57:23.574956613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 17 11:57:44 ha-783738 dockerd[1126]: time="2025-02-17T11:57:44.652472286Z" level=info msg="ignoring event" container=0eab009d1fe54d541fe5b166302e5af1a153e8aa37ad6a133704c1f40918f7c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 17 11:57:44 ha-783738 dockerd[1134]: time="2025-02-17T11:57:44.653058320Z" level=info msg="shim disconnected" id=0eab009d1fe54d541fe5b166302e5af1a153e8aa37ad6a133704c1f40918f7c9 namespace=moby
	Feb 17 11:57:44 ha-783738 dockerd[1134]: time="2025-02-17T11:57:44.653483834Z" level=warning msg="cleaning up after shim disconnected" id=0eab009d1fe54d541fe5b166302e5af1a153e8aa37ad6a133704c1f40918f7c9 namespace=moby
	Feb 17 11:57:44 ha-783738 dockerd[1134]: time="2025-02-17T11:57:44.653545740Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 17 11:57:45 ha-783738 dockerd[1126]: time="2025-02-17T11:57:45.663576348Z" level=info msg="ignoring event" container=1683ded4f12ef91eea7067f33248f5185b17f0532a1c1480efe277bcd8accfe6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 17 11:57:45 ha-783738 dockerd[1134]: time="2025-02-17T11:57:45.664110377Z" level=info msg="shim disconnected" id=1683ded4f12ef91eea7067f33248f5185b17f0532a1c1480efe277bcd8accfe6 namespace=moby
	Feb 17 11:57:45 ha-783738 dockerd[1134]: time="2025-02-17T11:57:45.664165013Z" level=warning msg="cleaning up after shim disconnected" id=1683ded4f12ef91eea7067f33248f5185b17f0532a1c1480efe277bcd8accfe6 namespace=moby
	Feb 17 11:57:45 ha-783738 dockerd[1134]: time="2025-02-17T11:57:45.664175956Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.854960498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.855123802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.855151191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.855373177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.858152322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.858222102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.858232103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 17 11:58:04 ha-783738 dockerd[1134]: time="2025-02-17T11:58:04.858372930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 17 11:58:25 ha-783738 dockerd[1126]: time="2025-02-17T11:58:25.325613613Z" level=info msg="ignoring event" container=0d8dd6abc6b0262f0e2de062685df6bbc87187dd14023d0fd12b894f48bd2001 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 17 11:58:25 ha-783738 dockerd[1134]: time="2025-02-17T11:58:25.326644755Z" level=info msg="shim disconnected" id=0d8dd6abc6b0262f0e2de062685df6bbc87187dd14023d0fd12b894f48bd2001 namespace=moby
	Feb 17 11:58:25 ha-783738 dockerd[1134]: time="2025-02-17T11:58:25.326737271Z" level=warning msg="cleaning up after shim disconnected" id=0d8dd6abc6b0262f0e2de062685df6bbc87187dd14023d0fd12b894f48bd2001 namespace=moby
	Feb 17 11:58:25 ha-783738 dockerd[1134]: time="2025-02-17T11:58:25.326756884Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 17 11:58:26 ha-783738 dockerd[1126]: time="2025-02-17T11:58:26.334899301Z" level=info msg="ignoring event" container=2e90f752fdc0601abb5401e228fa8355b97462cfd9f4dafb766f56eaf8e7b13a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 17 11:58:26 ha-783738 dockerd[1134]: time="2025-02-17T11:58:26.335703125Z" level=info msg="shim disconnected" id=2e90f752fdc0601abb5401e228fa8355b97462cfd9f4dafb766f56eaf8e7b13a namespace=moby
	Feb 17 11:58:26 ha-783738 dockerd[1134]: time="2025-02-17T11:58:26.335778773Z" level=warning msg="cleaning up after shim disconnected" id=2e90f752fdc0601abb5401e228fa8355b97462cfd9f4dafb766f56eaf8e7b13a namespace=moby
	Feb 17 11:58:26 ha-783738 dockerd[1134]: time="2025-02-17T11:58:26.335795547Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2e90f752fdc06       019ee182b58e2       42 seconds ago       Exited              kube-controller-manager   4                   eeb1b6c34de35       kube-controller-manager-ha-783738
	0d8dd6abc6b02       95c0bda56fc4d       42 seconds ago       Exited              kube-apiserver            4                   a531c479908eb       kube-apiserver-ha-783738
	d524d25a3256e       2b0d6572d062c       About a minute ago   Running             kube-scheduler            2                   5633bc5aacc12       kube-scheduler-ha-783738
	2b8921c7d9f71       22f88dde2caa4       About a minute ago   Running             kube-vip                  1                   5f0329677cb70       kube-vip-ha-783738
	aeb757a6db075       a9e7e6b294baf       About a minute ago   Running             etcd                      2                   8c5c6a3fd0ba0       etcd-ha-783738
	8c236b02a8316       6e38f40d628db       4 minutes ago        Exited              storage-provisioner       3                   3b5478be91580       storage-provisioner
	f460be4118731       8c811b4aec35f       4 minutes ago        Exited              busybox                   1                   cd41205ee4990       busybox-58667487b6-mp8w2
	5caaef1da4142       e29f9c7391fd9       4 minutes ago        Exited              kube-proxy                1                   3bada7fe972b9       kube-proxy-pgwb4
	95f567924c5ee       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   1                   33c8d49183b1a       coredns-668d6bf9bc-bhrvt
	b4ccb469b39af       df3849d954c98       4 minutes ago        Exited              kindnet-cni               1                   bba5ce66a15dd       kindnet-t72ln
	b674f5b7afb38       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   1                   bfd8d387b7e96       coredns-668d6bf9bc-k5k72
	1395373a3c212       2b0d6572d062c       5 minutes ago        Exited              kube-scheduler            1                   fe3b7022472a7       kube-scheduler-ha-783738
	0644596c7e815       a9e7e6b294baf       5 minutes ago        Exited              etcd                      1                   a79f0d4414c0a       etcd-ha-783738
	905fe651f5a2d       22f88dde2caa4       5 minutes ago        Exited              kube-vip                  0                   6e727a24edb43       kube-vip-ha-783738
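
Note: this table has the shape of `crictl ps -a` output for the primary node. Only kube-apiserver and kube-controller-manager are still crash-looping (both Exited on attempt 4), while the restarted etcd, kube-scheduler, and kube-vip are Running. A hypothetical re-query limited to the exited containers:

    sudo crictl ps -a --state exited
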
	
	
	==> coredns [95f567924c5e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:54083 - 5538 "HINFO IN 6952713337195609451.67698316276633629. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.046526479s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[586752551]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.037) (total time: 30004ms):
	Trace[586752551]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (11:54:29.042)
	Trace[586752551]: [30.004932204s] [30.004932204s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[31748474]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.037) (total time: 30005ms):
	Trace[31748474]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (11:54:29.043)
	Trace[31748474]: [30.005260877s] [30.005260877s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1254162758]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.043) (total time: 30000ms):
	Trace[1254162758]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:54:29.044)
	Trace[1254162758]: [30.000938039s] [30.000938039s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b674f5b7afb3] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47652 - 30454 "HINFO IN 3233588620932119307.6917908993167898246. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026177844s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1310151553]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.042) (total time: 30001ms):
	Trace[1310151553]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:54:29.043)
	Trace[1310151553]: [30.001216976s] [30.001216976s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1951418715]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.039) (total time: 30005ms):
	Trace[1951418715]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (11:54:29.044)
	Trace[1951418715]: [30.005382964s] [30.005382964s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[606941673]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Feb-2025 11:53:59.038) (total time: 30006ms):
	Trace[606941673]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30006ms (11:54:29.044)
	Trace[606941673]: [30.006431575s] [30.006431575s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
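
Note: both coredns replicas time out listing Services, Namespaces, and EndpointSlices through the kubernetes service VIP 10.96.0.1:443, which is exactly what an unreachable apiserver looks like from inside the cluster. An assumed in-node probe of that path (not run in this report):

    sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1   # kube-proxy's DNAT rules for the VIP
    curl -sk https://10.96.0.1/livez                            # answers once an apiserver is reachable
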
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0217 11:58:47.119103    3265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0217 11:58:47.120788    3265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0217 11:58:47.122369    3265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0217 11:58:47.123689    3265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0217 11:58:47.124983    3265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb17 11:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052638] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037697] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.851026] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.992141] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Feb17 11:57] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.664405] systemd-fstab-generator[476]: Ignoring "noauto" option for root device
	[  +0.058988] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058916] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
	[  +2.348725] systemd-fstab-generator[1055]: Ignoring "noauto" option for root device
	[  +0.313948] systemd-fstab-generator[1092]: Ignoring "noauto" option for root device
	[  +0.110900] systemd-fstab-generator[1104]: Ignoring "noauto" option for root device
	[  +0.140552] systemd-fstab-generator[1118]: Ignoring "noauto" option for root device
	[  +2.263360] kauditd_printk_skb: 199 callbacks suppressed
	[  +0.301992] systemd-fstab-generator[1377]: Ignoring "noauto" option for root device
	[  +0.125509] systemd-fstab-generator[1390]: Ignoring "noauto" option for root device
	[  +0.118202] systemd-fstab-generator[1402]: Ignoring "noauto" option for root device
	[  +0.144218] systemd-fstab-generator[1417]: Ignoring "noauto" option for root device
	[  +0.508597] systemd-fstab-generator[1584]: Ignoring "noauto" option for root device
	[  +6.843964] kauditd_printk_skb: 180 callbacks suppressed
	[  +8.294455] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [0644596c7e81] <==
	{"level":"warn","ts":"2025-02-17T11:56:37.953386Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"416.799075ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-02-17T11:56:37.953402Z","caller":"traceutil/trace.go:171","msg":"trace[234534568] range","detail":"{range_begin:/registry/deployments/; range_end:/registry/deployments0; }","duration":"416.832899ms","start":"2025-02-17T11:56:37.536564Z","end":"2025-02-17T11:56:37.953396Z","steps":["trace[234534568] 'agreement among raft nodes before linearized reading'  (duration: 416.815476ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-17T11:56:37.953416Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-17T11:56:37.536510Z","time spent":"416.902435ms","remote":"127.0.0.1:58532","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":0,"response size":0,"request content":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true "}
	2025/02/17 11:56:37 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-02-17T11:56:37.953469Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.057072714s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-02-17T11:56:37.953479Z","caller":"traceutil/trace.go:171","msg":"trace[2020420396] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"1.057490424s","start":"2025-02-17T11:56:36.895986Z","end":"2025-02-17T11:56:37.953476Z","steps":["trace[2020420396] 'agreement among raft nodes before linearized reading'  (duration: 1.057479846s)"],"step_count":1}
	{"level":"warn","ts":"2025-02-17T11:56:37.953491Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-17T11:56:36.895975Z","time spent":"1.057513489s","remote":"127.0.0.1:58120","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	2025/02/17 11:56:37 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-02-17T11:56:37.953557Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.889027766s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-02-17T11:56:37.953567Z","caller":"traceutil/trace.go:171","msg":"trace[159538693] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; }","duration":"1.889056203s","start":"2025-02-17T11:56:36.064508Z","end":"2025-02-17T11:56:37.953564Z","steps":["trace[159538693] 'agreement among raft nodes before linearized reading'  (duration: 1.88904446s)"],"step_count":1}
	{"level":"warn","ts":"2025-02-17T11:56:37.953580Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-17T11:56:36.064496Z","time spent":"1.889079683s","remote":"127.0.0.1:58254","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":0,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true "}
	2025/02/17 11:56:37 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-02-17T11:56:38.012328Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.249:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-17T11:56:38.012367Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.249:2379: use of closed network connection"}
	{"level":"info","ts":"2025-02-17T11:56:38.012413Z","caller":"etcdserver/server.go:1534","msg":"skipped leadership transfer; local server is not leader","local-member-id":"318ee90c3446d547","current-leader-member-id":"0"}
	{"level":"info","ts":"2025-02-17T11:56:38.012793Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.012892Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.012915Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.012991Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.013022Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.013134Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.013145Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"645ac05e9f2d470a"}
	{"level":"info","ts":"2025-02-17T11:56:38.016636Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.39.249:2380"}
	{"level":"info","ts":"2025-02-17T11:56:38.016720Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.39.249:2380"}
	{"level":"info","ts":"2025-02-17T11:56:38.016728Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"ha-783738","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.249:2380"],"advertise-client-urls":["https://192.168.39.249:2379"]}
	
	
	==> etcd [aeb757a6db07] <==
	{"level":"warn","ts":"2025-02-17T11:58:40.837045Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069265,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-02-17T11:58:41.084143Z","caller":"etcdserver/server.go:2161","msg":"failed to publish local member to cluster through raft","local-member-id":"318ee90c3446d547","local-member-attributes":"{Name:ha-783738 ClientURLs:[https://192.168.39.249:2379]}","request-path":"/0/members/318ee90c3446d547/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2025-02-17T11:58:41.337434Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069265,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-02-17T11:58:41.827365Z","caller":"etcdserver/v3_server.go:932","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2025-02-17T11:58:41.827445Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.000504247s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2025-02-17T11:58:41.827469Z","caller":"traceutil/trace.go:171","msg":"trace[1958910963] range","detail":"{range_begin:; range_end:; }","duration":"7.000551306s","start":"2025-02-17T11:58:34.826907Z","end":"2025-02-17T11:58:41.827459Z","steps":["trace[1958910963] 'agreement among raft nodes before linearized reading'  (duration: 7.000502454s)"],"step_count":1}
	{"level":"error","ts":"2025-02-17T11:58:41.827501Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2171\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2688\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3142\nnet/http.(*conn).serve\n\tnet/http/server.go:2044"}
	{"level":"info","ts":"2025-02-17T11:58:42.436651Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 is starting a new election at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:42.436750Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:42.436772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgPreVoteResp from 318ee90c3446d547 at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:42.436803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 3, index: 3030] sent MsgPreVote request to 645ac05e9f2d470a at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:44.036156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 is starting a new election at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:44.036195Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:44.036247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgPreVoteResp from 318ee90c3446d547 at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:44.036264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 3, index: 3030] sent MsgPreVote request to 645ac05e9f2d470a at term 3"}
	{"level":"warn","ts":"2025-02-17T11:58:44.107198Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"645ac05e9f2d470a","rtt":"0s","error":"dial tcp 192.168.39.31:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-02-17T11:58:44.107261Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"645ac05e9f2d470a","rtt":"0s","error":"dial tcp 192.168.39.31:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-02-17T11:58:45.328421Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069268,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-02-17T11:58:45.636407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 is starting a new election at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:45.636453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:45.636470Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgPreVoteResp from 318ee90c3446d547 at term 3"}
	{"level":"info","ts":"2025-02-17T11:58:45.636489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 3, index: 3030] sent MsgPreVote request to 645ac05e9f2d470a at term 3"}
	{"level":"warn","ts":"2025-02-17T11:58:45.829119Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069268,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-02-17T11:58:46.329441Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069268,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-02-17T11:58:46.830542Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368416165570069268,"retry-timeout":"500ms"}
	
	
	==> kernel <==
	 11:58:47 up 1 min,  0 users,  load average: 0.52, 0.30, 0.11
	Linux ha-783738 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b4ccb469b39a] <==
	I0217 11:56:00.000922       1 main.go:324] Node ha-783738-m02 has CIDR [10.244.1.0/24] 
	I0217 11:56:00.001386       1 main.go:297] Handling node with IPs: map[192.168.39.216:{}]
	I0217 11:56:00.001417       1 main.go:324] Node ha-783738-m03 has CIDR [10.244.2.0/24] 
	I0217 11:56:00.002870       1 main.go:297] Handling node with IPs: map[192.168.39.168:{}]
	I0217 11:56:00.003089       1 main.go:324] Node ha-783738-m04 has CIDR [10.244.3.0/24] 
	I0217 11:56:10.003758       1 main.go:297] Handling node with IPs: map[192.168.39.31:{}]
	I0217 11:56:10.004120       1 main.go:324] Node ha-783738-m02 has CIDR [10.244.1.0/24] 
	I0217 11:56:10.004466       1 main.go:297] Handling node with IPs: map[192.168.39.216:{}]
	I0217 11:56:10.004579       1 main.go:324] Node ha-783738-m03 has CIDR [10.244.2.0/24] 
	I0217 11:56:10.004848       1 main.go:297] Handling node with IPs: map[192.168.39.168:{}]
	I0217 11:56:10.004993       1 main.go:324] Node ha-783738-m04 has CIDR [10.244.3.0/24] 
	I0217 11:56:10.005322       1 main.go:297] Handling node with IPs: map[192.168.39.249:{}]
	I0217 11:56:10.005440       1 main.go:301] handling current node
	I0217 11:56:20.008868       1 main.go:297] Handling node with IPs: map[192.168.39.249:{}]
	I0217 11:56:20.008992       1 main.go:301] handling current node
	I0217 11:56:20.009032       1 main.go:297] Handling node with IPs: map[192.168.39.31:{}]
	I0217 11:56:20.009107       1 main.go:324] Node ha-783738-m02 has CIDR [10.244.1.0/24] 
	I0217 11:56:20.009351       1 main.go:297] Handling node with IPs: map[192.168.39.168:{}]
	I0217 11:56:20.009426       1 main.go:324] Node ha-783738-m04 has CIDR [10.244.3.0/24] 
	I0217 11:56:30.000205       1 main.go:297] Handling node with IPs: map[192.168.39.168:{}]
	I0217 11:56:30.000320       1 main.go:324] Node ha-783738-m04 has CIDR [10.244.3.0/24] 
	I0217 11:56:30.000673       1 main.go:297] Handling node with IPs: map[192.168.39.249:{}]
	I0217 11:56:30.004120       1 main.go:301] handling current node
	I0217 11:56:30.004403       1 main.go:297] Handling node with IPs: map[192.168.39.31:{}]
	I0217 11:56:30.004484       1 main.go:324] Node ha-783738-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [0d8dd6abc6b0] <==
	W0217 11:58:05.008746       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0217 11:58:05.009254       1 options.go:238] external host was not specified, using 192.168.39.249
	I0217 11:58:05.012100       1 server.go:143] Version: v1.32.1
	I0217 11:58:05.012139       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0217 11:58:05.254592       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0217 11:58:05.265931       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0217 11:58:05.302917       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0217 11:58:05.302958       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0217 11:58:05.303380       1 instance.go:233] Using reconciler: lease
	W0217 11:58:25.253372       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0217 11:58:25.253478       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0217 11:58:25.304453       1 instance.go:226] Error creating leases: error creating storage factory: context deadline exceeded
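
Note: the fatal exit follows directly from the etcd state above: the lease-based storage reconciler cannot complete a handshake with 127.0.0.1:2379 inside its startup budget (11:58:05 to 11:58:25), so the apiserver aborts and the kubelet will restart it. A hedged direct probe of the same dependency (client certificate paths are assumptions):

    curl -sk https://127.0.0.1:2379/health \
      --cacert /var/lib/minikube/certs/etcd/ca.crt \
      --cert   /var/lib/minikube/certs/apiserver-etcd-client.crt \
      --key    /var/lib/minikube/certs/apiserver-etcd-client.key
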
	
	
	==> kube-controller-manager [2e90f752fdc0] <==
	I0217 11:58:05.575513       1 serving.go:386] Generated self-signed cert in-memory
	I0217 11:58:05.850219       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0217 11:58:05.850380       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0217 11:58:05.851835       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0217 11:58:05.852508       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0217 11:58:05.852713       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0217 11:58:05.852833       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0217 11:58:26.312388       1 controllermanager.go:230] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.249:8443/healthz\": dial tcp 192.168.39.249:8443: connect: connection refused"
	
	
	==> kube-proxy [5caaef1da414] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0217 11:53:59.616708       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0217 11:53:59.651486       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.249"]
	E0217 11:53:59.651650       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0217 11:53:59.696326       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0217 11:53:59.696377       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0217 11:53:59.696401       1 server_linux.go:170] "Using iptables Proxier"
	I0217 11:53:59.710221       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0217 11:53:59.711347       1 server.go:497] "Version info" version="v1.32.1"
	I0217 11:53:59.711380       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0217 11:53:59.716398       1 config.go:199] "Starting service config controller"
	I0217 11:53:59.717714       1 config.go:105] "Starting endpoint slice config controller"
	I0217 11:53:59.717746       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0217 11:53:59.718142       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0217 11:53:59.718615       1 config.go:329] "Starting node config controller"
	I0217 11:53:59.718758       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0217 11:53:59.817915       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0217 11:53:59.819456       1 shared_informer.go:320] Caches are synced for service config
	I0217 11:53:59.821373       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1395373a3c21] <==
	E0217 11:53:52.919534       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:53.771964       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:53.772105       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:55.316775       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.249:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:55.316841       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.249:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:55.317229       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.249:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:55.317287       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.249:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:55.599247       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.249:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:55.599332       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.249:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:55.855471       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.249:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:55.855524       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.249:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:56.059180       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:53:56.059238       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:53:59.073926       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0217 11:53:59.074031       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0217 11:53:59.074570       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0217 11:53:59.075126       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0217 11:53:59.075450       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0217 11:53:59.074624       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0217 11:54:13.896773       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0217 11:56:05.957670       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-58667487b6-v7x5t\": pod busybox-58667487b6-v7x5t is already assigned to node \"ha-783738-m04\"" plugin="DefaultBinder" pod="default/busybox-58667487b6-v7x5t" node="ha-783738-m04"
	E0217 11:56:05.971236       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod c5148a30-9b13-42ed-87c8-723413b074d3(default/busybox-58667487b6-v7x5t) wasn't assumed so cannot be forgotten" pod="default/busybox-58667487b6-v7x5t"
	E0217 11:56:05.971303       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-58667487b6-v7x5t\": pod busybox-58667487b6-v7x5t is already assigned to node \"ha-783738-m04\"" pod="default/busybox-58667487b6-v7x5t"
	I0217 11:56:05.971509       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-58667487b6-v7x5t" node="ha-783738-m04"
	E0217 11:56:37.999387       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d524d25a3256] <==
	E0217 11:58:26.313559       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.249:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37922->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.313700       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37926->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.313773       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37926->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.313906       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.249:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37956->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.313971       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.249:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37956->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.314101       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.249:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37960->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.314185       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.249:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37960->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.314462       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37888->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.314547       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37888->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.314713       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37930->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.314798       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37930->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.314960       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.249:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37948->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.315166       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.249:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37948->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:26.315243       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: Get "https://192.168.39.249:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37940->192.168.39.249:8443: read: connection reset by peer
	E0217 11:58:26.315352       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.249:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.249:37940->192.168.39.249:8443: read: connection reset by peer" logger="UnhandledError"
	W0217 11:58:29.432094       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.249:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:58:29.432235       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.249:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:58:32.758441       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.249:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:58:32.758583       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.249:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:58:33.069242       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:58:33.069380       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:58:35.727701       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.249:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:58:35.727922       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.249:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0217 11:58:36.974377       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.249:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0217 11:58:36.974419       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.249:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Feb 17 11:58:32 ha-783738 kubelet[1591]: E0217 11:58:32.182236    1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-783738\" not found" node="ha-783738"
	Feb 17 11:58:32 ha-783738 kubelet[1591]: I0217 11:58:32.182362    1591 scope.go:117] "RemoveContainer" containerID="2e90f752fdc0601abb5401e228fa8355b97462cfd9f4dafb766f56eaf8e7b13a"
	Feb 17 11:58:32 ha-783738 kubelet[1591]: E0217 11:58:32.182489    1591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-783738_kube-system(37cb2af166ca362ca24afd5a80241d47)\"" pod="kube-system/kube-controller-manager-ha-783738" podUID="37cb2af166ca362ca24afd5a80241d47"
	Feb 17 11:58:33 ha-783738 kubelet[1591]: E0217 11:58:33.382650    1591 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.254:8443: connect: no route to host" node="ha-783738"
	Feb 17 11:58:33 ha-783738 kubelet[1591]: E0217 11:58:33.382815    1591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-783738?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Feb 17 11:58:33 ha-783738 kubelet[1591]: W0217 11:58:33.382655    1591 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-783738&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	Feb 17 11:58:33 ha-783738 kubelet[1591]: E0217 11:58:33.383127    1591 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-783738&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Feb 17 11:58:36 ha-783738 kubelet[1591]: E0217 11:58:36.704343    1591 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-783738\" not found"
	Feb 17 11:58:37 ha-783738 kubelet[1591]: E0217 11:58:37.748003    1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-783738\" not found" node="ha-783738"
	Feb 17 11:58:39 ha-783738 kubelet[1591]: E0217 11:58:39.526616    1591 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-783738.1824fce9ab5e06e9  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-783738,UID:ha-783738,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-783738,},FirstTimestamp:2025-02-17 11:57:16.604499689 +0000 UTC m=+0.220042798,LastTimestamp:2025-02-17 11:57:16.604499689 +0000 UTC m=+0.220042798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-783738,}"
	Feb 17 11:58:39 ha-783738 kubelet[1591]: E0217 11:58:39.748034    1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-783738\" not found" node="ha-783738"
	Feb 17 11:58:40 ha-783738 kubelet[1591]: I0217 11:58:40.384759    1591 kubelet_node_status.go:76] "Attempting to register node" node="ha-783738"
	Feb 17 11:58:42 ha-783738 kubelet[1591]: E0217 11:58:42.599676    1591 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.254:8443: connect: no route to host" node="ha-783738"
	Feb 17 11:58:42 ha-783738 kubelet[1591]: E0217 11:58:42.599851    1591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-783738?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Feb 17 11:58:43 ha-783738 kubelet[1591]: E0217 11:58:43.747946    1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-783738\" not found" node="ha-783738"
	Feb 17 11:58:43 ha-783738 kubelet[1591]: I0217 11:58:43.748020    1591 scope.go:117] "RemoveContainer" containerID="0d8dd6abc6b0262f0e2de062685df6bbc87187dd14023d0fd12b894f48bd2001"
	Feb 17 11:58:43 ha-783738 kubelet[1591]: E0217 11:58:43.748145    1591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-783738_kube-system(77f0e47471ffa89381403ccfd101e5e7)\"" pod="kube-system/kube-apiserver-ha-783738" podUID="77f0e47471ffa89381403ccfd101e5e7"
	Feb 17 11:58:44 ha-783738 kubelet[1591]: E0217 11:58:44.748575    1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-783738\" not found" node="ha-783738"
	Feb 17 11:58:44 ha-783738 kubelet[1591]: I0217 11:58:44.749252    1591 scope.go:117] "RemoveContainer" containerID="2e90f752fdc0601abb5401e228fa8355b97462cfd9f4dafb766f56eaf8e7b13a"
	Feb 17 11:58:44 ha-783738 kubelet[1591]: E0217 11:58:44.750099    1591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-783738_kube-system(37cb2af166ca362ca24afd5a80241d47)\"" pod="kube-system/kube-controller-manager-ha-783738" podUID="37cb2af166ca362ca24afd5a80241d47"
	Feb 17 11:58:45 ha-783738 kubelet[1591]: W0217 11:58:45.670801    1591 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	Feb 17 11:58:45 ha-783738 kubelet[1591]: E0217 11:58:45.670876    1591 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Feb 17 11:58:45 ha-783738 kubelet[1591]: W0217 11:58:45.670970    1591 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	Feb 17 11:58:45 ha-783738 kubelet[1591]: E0217 11:58:45.671065    1591 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Feb 17 11:58:46 ha-783738 kubelet[1591]: E0217 11:58:46.704834    1591 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-783738\" not found"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-783738 -n ha-783738
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-783738 -n ha-783738: exit status 2 (227.219087ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-783738" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.75s)
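
Note on the probe above: minikube status encodes component health in its exit code as well as in the printed state, which is why the harness accepts exit status 2 here ("may be ok") and simply skips the kubectl checks once the apiserver reports Stopped. A minimal sketch of re-running the same probe by hand, assuming the ha-783738 profile from this run still exists (the --format value is a Go template over the status struct, and -n selects which node to query):

	out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-783738 -n ha-783738; echo "exit=$?"

With the apiserver down this prints Stopped and exits non-zero, matching the transcript.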


Test pass (306/344)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.17
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.1/json-events 3.51
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.07
18 TestDownloadOnly/v1.32.1/DeleteAll 0.15
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.61
22 TestOffline 90.11
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 216.41
29 TestAddons/serial/Volcano 44.58
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 8.53
35 TestAddons/parallel/Registry 15.82
36 TestAddons/parallel/Ingress 20.91
37 TestAddons/parallel/InspektorGadget 10.69
38 TestAddons/parallel/MetricsServer 6.99
40 TestAddons/parallel/CSI 50.29
41 TestAddons/parallel/Headlamp 20.68
42 TestAddons/parallel/CloudSpanner 5.58
43 TestAddons/parallel/LocalPath 56.25
44 TestAddons/parallel/NvidiaDevicePlugin 5.49
45 TestAddons/parallel/Yakd 11.98
47 TestAddons/StoppedEnableDisable 13.58
48 TestCertOptions 65.27
49 TestCertExpiration 290.46
50 TestDockerFlags 76.92
51 TestForceSystemdFlag 65.64
52 TestForceSystemdEnv 100.03
54 TestKVMDriverInstallOrUpdate 4.21
58 TestErrorSpam/setup 50.24
59 TestErrorSpam/start 0.36
60 TestErrorSpam/status 0.74
61 TestErrorSpam/pause 1.22
62 TestErrorSpam/unpause 1.42
63 TestErrorSpam/stop 15.55
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 86.79
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 40.34
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.15
75 TestFunctional/serial/CacheCmd/cache/add_local 1.27
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.13
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 41.49
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 0.94
86 TestFunctional/serial/LogsFileCmd 1.03
87 TestFunctional/serial/InvalidService 4.23
89 TestFunctional/parallel/ConfigCmd 0.4
90 TestFunctional/parallel/DashboardCmd 21.16
91 TestFunctional/parallel/DryRun 0.31
92 TestFunctional/parallel/InternationalLanguage 0.17
93 TestFunctional/parallel/StatusCmd 1.09
97 TestFunctional/parallel/ServiceCmdConnect 8.6
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 45.34
101 TestFunctional/parallel/SSHCmd 0.41
102 TestFunctional/parallel/CpCmd 1.45
103 TestFunctional/parallel/MySQL 40.56
104 TestFunctional/parallel/FileSync 0.2
105 TestFunctional/parallel/CertSync 1.45
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.22
113 TestFunctional/parallel/License 0.2
114 TestFunctional/parallel/ServiceCmd/DeployApp 12.21
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
116 TestFunctional/parallel/MountCmd/any-port 9.64
117 TestFunctional/parallel/ProfileCmd/profile_list 0.36
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
119 TestFunctional/parallel/DockerEnv/bash 0.9
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
123 TestFunctional/parallel/MountCmd/specific-port 1.7
124 TestFunctional/parallel/MountCmd/VerifyCleanup 1.59
125 TestFunctional/parallel/ServiceCmd/List 0.43
126 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
127 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
128 TestFunctional/parallel/ServiceCmd/Format 0.29
129 TestFunctional/parallel/ServiceCmd/URL 0.29
139 TestFunctional/parallel/Version/short 0.05
140 TestFunctional/parallel/Version/components 0.51
141 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
142 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
143 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
144 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
145 TestFunctional/parallel/ImageCommands/ImageBuild 3.51
146 TestFunctional/parallel/ImageCommands/Setup 1.61
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.11
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.79
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.47
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
151 TestFunctional/parallel/ImageCommands/ImageRemove 1.02
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.65
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.01
158 TestGvisorAddon 214.4
161 TestMultiControlPlane/serial/StartCluster 219.89
162 TestMultiControlPlane/serial/DeployApp 5.49
163 TestMultiControlPlane/serial/PingHostFromPods 1.26
164 TestMultiControlPlane/serial/AddWorkerNode 62.14
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
167 TestMultiControlPlane/serial/CopyFile 12.98
168 TestMultiControlPlane/serial/StopSecondaryNode 13.92
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.66
170 TestMultiControlPlane/serial/RestartSecondaryNode 42.31
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.84
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 244.82
173 TestMultiControlPlane/serial/DeleteSecondaryNode 6.89
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.62
175 TestMultiControlPlane/serial/StopCluster 37.57
182 TestImageBuild/serial/Setup 50.97
183 TestImageBuild/serial/NormalBuild 1.37
184 TestImageBuild/serial/BuildWithBuildArg 0.86
185 TestImageBuild/serial/BuildWithDockerIgnore 0.58
186 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.82
190 TestJSONOutput/start/Command 88.45
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 0.56
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.55
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 7.55
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.21
218 TestMainNoArgs 0.05
219 TestMinikubeProfile 103.98
222 TestMountStart/serial/StartWithMountFirst 31.22
223 TestMountStart/serial/VerifyMountFirst 0.37
224 TestMountStart/serial/StartWithMountSecond 28.19
225 TestMountStart/serial/VerifyMountSecond 0.37
226 TestMountStart/serial/DeleteFirst 0.69
227 TestMountStart/serial/VerifyMountPostDelete 0.38
228 TestMountStart/serial/Stop 2.28
229 TestMountStart/serial/RestartStopped 27.8
230 TestMountStart/serial/VerifyMountPostStop 0.37
233 TestMultiNode/serial/FreshStart2Nodes 133.39
234 TestMultiNode/serial/DeployApp2Nodes 4.2
235 TestMultiNode/serial/PingHostFrom2Pods 0.81
236 TestMultiNode/serial/AddNode 58.25
237 TestMultiNode/serial/MultiNodeLabels 0.06
238 TestMultiNode/serial/ProfileList 0.59
239 TestMultiNode/serial/CopyFile 7.41
240 TestMultiNode/serial/StopNode 3.38
241 TestMultiNode/serial/StartAfterStop 42.09
242 TestMultiNode/serial/RestartKeepsNodes 174.24
243 TestMultiNode/serial/DeleteNode 2.25
244 TestMultiNode/serial/StopMultiNode 25.16
245 TestMultiNode/serial/RestartMultiNode 120.55
246 TestMultiNode/serial/ValidateNameConflict 52.95
251 TestPreload 150.88
253 TestScheduledStopUnix 119.38
254 TestSkaffold 127.66
257 TestRunningBinaryUpgrade 174.36
259 TestKubernetesUpgrade 264.56
263 TestPause/serial/Start 64.09
275 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
276 TestNoKubernetes/serial/StartWithK8s 96.32
277 TestPause/serial/SecondStartNoReconfiguration 78.23
285 TestNoKubernetes/serial/StartWithStopK8s 45.78
286 TestPause/serial/Pause 0.63
287 TestPause/serial/VerifyStatus 0.28
288 TestPause/serial/Unpause 0.76
289 TestPause/serial/PauseAgain 0.97
290 TestPause/serial/DeletePaused 1.92
291 TestNoKubernetes/serial/Start 32.2
292 TestPause/serial/VerifyDeletedResources 4.49
293 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
294 TestNoKubernetes/serial/ProfileList 0.56
295 TestNoKubernetes/serial/Stop 2.29
296 TestNoKubernetes/serial/StartNoArgs 96.75
297 TestStoppedBinaryUpgrade/Setup 0.43
298 TestStoppedBinaryUpgrade/Upgrade 146.67
299 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
300 TestStoppedBinaryUpgrade/MinikubeLogs 1.06
301 TestNetworkPlugins/group/auto/Start 92.6
302 TestNetworkPlugins/group/kindnet/Start 102.66
303 TestNetworkPlugins/group/calico/Start 120.84
304 TestNetworkPlugins/group/auto/KubeletFlags 0.43
305 TestNetworkPlugins/group/auto/NetCatPod 11.96
306 TestNetworkPlugins/group/auto/DNS 0.16
307 TestNetworkPlugins/group/auto/Localhost 0.15
308 TestNetworkPlugins/group/auto/HairPin 0.13
309 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
310 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
311 TestNetworkPlugins/group/kindnet/NetCatPod 12.32
312 TestNetworkPlugins/group/custom-flannel/Start 72.37
313 TestNetworkPlugins/group/kindnet/DNS 0.25
314 TestNetworkPlugins/group/kindnet/Localhost 0.17
315 TestNetworkPlugins/group/kindnet/HairPin 0.17
316 TestNetworkPlugins/group/false/Start 95.52
317 TestNetworkPlugins/group/enable-default-cni/Start 110.51
318 TestNetworkPlugins/group/calico/ControllerPod 6.01
319 TestNetworkPlugins/group/calico/KubeletFlags 0.46
320 TestNetworkPlugins/group/calico/NetCatPod 11.23
321 TestNetworkPlugins/group/calico/DNS 0.18
322 TestNetworkPlugins/group/calico/Localhost 0.16
323 TestNetworkPlugins/group/calico/HairPin 0.13
324 TestNetworkPlugins/group/flannel/Start 92.78
325 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
326 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.21
327 TestNetworkPlugins/group/custom-flannel/DNS 0.17
328 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
329 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
330 TestNetworkPlugins/group/false/KubeletFlags 0.33
331 TestNetworkPlugins/group/false/NetCatPod 10.29
332 TestNetworkPlugins/group/bridge/Start 111.71
333 TestNetworkPlugins/group/false/DNS 0.15
334 TestNetworkPlugins/group/false/Localhost 0.13
335 TestNetworkPlugins/group/false/HairPin 0.14
336 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
337 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.28
338 TestNetworkPlugins/group/kubenet/Start 81.87
339 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
340 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
341 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
343 TestStartStop/group/old-k8s-version/serial/FirstStart 198.31
344 TestNetworkPlugins/group/flannel/ControllerPod 6.01
345 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
346 TestNetworkPlugins/group/flannel/NetCatPod 9.24
347 TestNetworkPlugins/group/flannel/DNS 0.2
348 TestNetworkPlugins/group/flannel/Localhost 0.16
349 TestNetworkPlugins/group/flannel/HairPin 0.15
351 TestStartStop/group/no-preload/serial/FirstStart 83.46
352 TestNetworkPlugins/group/kubenet/KubeletFlags 0.37
353 TestNetworkPlugins/group/kubenet/NetCatPod 11.48
354 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
355 TestNetworkPlugins/group/bridge/NetCatPod 13.26
356 TestNetworkPlugins/group/kubenet/DNS 0.17
357 TestNetworkPlugins/group/kubenet/Localhost 0.14
358 TestNetworkPlugins/group/kubenet/HairPin 0.15
359 TestNetworkPlugins/group/bridge/DNS 0.21
360 TestNetworkPlugins/group/bridge/Localhost 0.16
361 TestNetworkPlugins/group/bridge/HairPin 0.14
363 TestStartStop/group/embed-certs/serial/FirstStart 66.6
365 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 93.84
366 TestStartStop/group/no-preload/serial/DeployApp 12.32
367 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.04
368 TestStartStop/group/no-preload/serial/Stop 13.36
369 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
370 TestStartStop/group/no-preload/serial/SecondStart 294
371 TestStartStop/group/embed-certs/serial/DeployApp 9.32
372 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
373 TestStartStop/group/embed-certs/serial/Stop 13.35
374 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
375 TestStartStop/group/embed-certs/serial/SecondStart 297.65
376 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.28
377 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.94
378 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.34
379 TestStartStop/group/old-k8s-version/serial/DeployApp 9.51
380 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
381 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 299.54
382 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.09
383 TestStartStop/group/old-k8s-version/serial/Stop 13.35
384 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
385 TestStartStop/group/old-k8s-version/serial/SecondStart 396.76
386 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 15.01
387 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
388 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
389 TestStartStop/group/no-preload/serial/Pause 2.58
391 TestStartStop/group/newest-cni/serial/FirstStart 62.66
392 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
393 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.07
394 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.19
395 TestStartStop/group/embed-certs/serial/Pause 2.41
396 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
397 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
398 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
399 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.43
400 TestStartStop/group/newest-cni/serial/DeployApp 0
401 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.82
402 TestStartStop/group/newest-cni/serial/Stop 12.62
403 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
404 TestStartStop/group/newest-cni/serial/SecondStart 37.53
405 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
406 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
407 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
408 TestStartStop/group/newest-cni/serial/Pause 2.18
409 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
410 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
411 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.19
412 TestStartStop/group/old-k8s-version/serial/Pause 2.24
TestDownloadOnly/v1.20.0/json-events (7.17s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-992768 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-992768 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (7.172972563s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.17s)
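
For context: --download-only makes minikube fetch the ISO, the preload tarball, and related binaries without ever creating a VM, and -o=json switches progress reporting to line-delimited JSON events, which is the stream this test parses. A rough way to eyeball those events by hand (a sketch, not part of the test; the "demo" profile name is arbitrary, jq is assumed to be installed, and each line is assumed to be a single CloudEvents-style JSON object as recent minikube versions emit):

	out/minikube-linux-amd64 start -o=json --download-only -p demo --driver=kvm2 | jq -r .type   # prints one event type per line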

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0217 11:34:52.977453   84502 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0217 11:34:52.977560   84502 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20427-77349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-992768
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-992768: exit status 85 (63.947445ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-992768 | jenkins | v1.35.0 | 17 Feb 25 11:34 UTC |          |
	|         | -p download-only-992768        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/17 11:34:45
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0217 11:34:45.845589   84513 out.go:345] Setting OutFile to fd 1 ...
	I0217 11:34:45.845859   84513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:34:45.845870   84513 out.go:358] Setting ErrFile to fd 2...
	I0217 11:34:45.845877   84513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:34:45.846065   84513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-77349/.minikube/bin
	W0217 11:34:45.846206   84513 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20427-77349/.minikube/config/config.json: open /home/jenkins/minikube-integration/20427-77349/.minikube/config/config.json: no such file or directory
	I0217 11:34:45.846835   84513 out.go:352] Setting JSON to true
	I0217 11:34:45.848389   84513 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4634,"bootTime":1739787452,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0217 11:34:45.848582   84513 start.go:139] virtualization: kvm guest
	I0217 11:34:45.851210   84513 out.go:97] [download-only-992768] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0217 11:34:45.851365   84513 notify.go:220] Checking for updates...
	W0217 11:34:45.851331   84513 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20427-77349/.minikube/cache/preloaded-tarball: no such file or directory
	I0217 11:34:45.852828   84513 out.go:169] MINIKUBE_LOCATION=20427
	I0217 11:34:45.854332   84513 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0217 11:34:45.855546   84513 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:34:45.856732   84513 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-77349/.minikube
	I0217 11:34:45.857831   84513 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0217 11:34:45.859804   84513 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0217 11:34:45.860028   84513 driver.go:394] Setting default libvirt URI to qemu:///system
	I0217 11:34:45.895713   84513 out.go:97] Using the kvm2 driver based on user configuration
	I0217 11:34:45.895746   84513 start.go:297] selected driver: kvm2
	I0217 11:34:45.895752   84513 start.go:901] validating driver "kvm2" against <nil>
	I0217 11:34:45.896063   84513 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0217 11:34:45.896156   84513 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20427-77349/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0217 11:34:45.912549   84513 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0217 11:34:45.912606   84513 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0217 11:34:45.913151   84513 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0217 11:34:45.913312   84513 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0217 11:34:45.913351   84513 cni.go:84] Creating CNI manager for ""
	I0217 11:34:45.913420   84513 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0217 11:34:45.913481   84513 start.go:340] cluster config:
	{Name:download-only-992768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-992768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 11:34:45.913654   84513 iso.go:125] acquiring lock: {Name:mk4380b7bda8fcd8bced9705ff1695c3fb7dac0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0217 11:34:45.915566   84513 out.go:97] Downloading VM boot image ...
	I0217 11:34:45.915596   84513 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20427-77349/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0217 11:34:48.621169   84513 out.go:97] Starting "download-only-992768" primary control-plane node in "download-only-992768" cluster
	I0217 11:34:48.621193   84513 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0217 11:34:48.643259   84513 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0217 11:34:48.643299   84513 cache.go:56] Caching tarball of preloaded images
	I0217 11:34:48.643459   84513 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0217 11:34:48.645109   84513 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0217 11:34:48.645147   84513 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0217 11:34:48.671419   84513 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/20427-77349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-992768 host does not exist
	  To start a cluster, run: "minikube start -p download-only-992768"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
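
The non-zero exit above is itself the behavior under test: the profile was created with --download-only, so there is no control-plane host to collect logs from (the output ends with "The control-plane node download-only-992768 host does not exist"), and minikube logs signals that condition with exit status 85, which the assertion tolerates. Reproducing it by hand is just (a sketch, assuming the profile has not yet been removed by the later DeleteAll step):

	out/minikube-linux-amd64 logs -p download-only-992768; echo "exit=$?"   # expect 85 while the host does not exist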

TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-992768
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.32.1/json-events (3.51s)

=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-417699 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-417699 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=docker --driver=kvm2 : (3.510148179s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (3.51s)

TestDownloadOnly/v1.32.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0217 11:34:56.826932   84502 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
I0217 11:34:56.826984   84502 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20427-77349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

TestDownloadOnly/v1.32.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-417699
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-417699: exit status 85 (67.332368ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-992768 | jenkins | v1.35.0 | 17 Feb 25 11:34 UTC |                     |
	|         | -p download-only-992768        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 17 Feb 25 11:34 UTC | 17 Feb 25 11:34 UTC |
	| delete  | -p download-only-992768        | download-only-992768 | jenkins | v1.35.0 | 17 Feb 25 11:34 UTC | 17 Feb 25 11:34 UTC |
	| start   | -o=json --download-only        | download-only-417699 | jenkins | v1.35.0 | 17 Feb 25 11:34 UTC |                     |
	|         | -p download-only-417699        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/17 11:34:53
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0217 11:34:53.358000   84703 out.go:345] Setting OutFile to fd 1 ...
	I0217 11:34:53.358132   84703 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:34:53.358144   84703 out.go:358] Setting ErrFile to fd 2...
	I0217 11:34:53.358151   84703 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:34:53.358359   84703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-77349/.minikube/bin
	I0217 11:34:53.358941   84703 out.go:352] Setting JSON to true
	I0217 11:34:53.359828   84703 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4641,"bootTime":1739787452,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0217 11:34:53.359931   84703 start.go:139] virtualization: kvm guest
	I0217 11:34:53.362388   84703 out.go:97] [download-only-417699] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0217 11:34:53.362565   84703 notify.go:220] Checking for updates...
	I0217 11:34:53.364161   84703 out.go:169] MINIKUBE_LOCATION=20427
	I0217 11:34:53.365711   84703 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0217 11:34:53.367328   84703 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:34:53.368803   84703 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-77349/.minikube
	I0217 11:34:53.370197   84703 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-417699 host does not exist
	  To start a cluster, run: "minikube start -p download-only-417699"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.07s)
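
(How to read the klog prefixes throughout this report: per the "Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" header above, in a line such as "I0217 11:34:53.358000   84703 out.go:345] Setting OutFile to fd 1 ...", the leading I is the severity (Info; W, E and F would be warning, error and fatal), 0217 is the month and day, 11:34:53.358000 the time of day with microseconds, 84703 the thread id, and out.go:345 the emitting source file and line.)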

TestDownloadOnly/v1.32.1/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.15s)

TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-417699
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I0217 11:34:57.437581   84502 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-853281 --alsologtostderr --binary-mirror http://127.0.0.1:38865 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-853281" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-853281
--- PASS: TestBinaryMirror (0.61s)

TestOffline (90.11s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-658129 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-658129 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m29.09985789s)
helpers_test.go:175: Cleaning up "offline-docker-658129" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-658129
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-658129: (1.007597796s)
--- PASS: TestOffline (90.11s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-603759
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-603759: exit status 85 (56.590376ms)

-- stdout --
	* Profile "addons-603759" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-603759"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-603759
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-603759: exit status 85 (56.171898ms)

-- stdout --
	* Profile "addons-603759" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-603759"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (216.41s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-603759 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-603759 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m36.413358486s)
--- PASS: TestAddons/Setup (216.41s)

TestAddons/serial/Volcano (44.58s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 14.598732ms
addons_test.go:815: volcano-admission stabilized in 14.694727ms
addons_test.go:807: volcano-scheduler stabilized in 15.039893ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-4ngl8" [2ae0dba8-c64a-48e3-a5d3-ddc09b59248a] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.00355187s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-jbl47" [2c9b9cab-ec23-4bdb-8be3-1f613d9c9efd] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00394555s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-tpx8h" [482b6fd6-0e37-4520-a1a6-fdeaa8725b3b] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00459969s
addons_test.go:842: (dbg) Run:  kubectl --context addons-603759 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-603759 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-603759 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [b326fa3e-13f8-4f11-8cda-5d4593f318e4] Pending
helpers_test.go:344: "test-job-nginx-0" [b326fa3e-13f8-4f11-8cda-5d4593f318e4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [b326fa3e-13f8-4f11-8cda-5d4593f318e4] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 17.003709989s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-603759 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-603759 addons disable volcano --alsologtostderr -v=1: (11.174905316s)
--- PASS: TestAddons/serial/Volcano (44.58s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-603759 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-603759 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/serial/GCPAuth/FakeCredentials (8.53s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-603759 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-603759 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f138512a-d5e2-4ce0-9c0b-8d40d401d561] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f138512a-d5e2-4ce0-9c0b-8d40d401d561] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.00394974s
addons_test.go:633: (dbg) Run:  kubectl --context addons-603759 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-603759 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-603759 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.53s)

TestAddons/parallel/Registry (15.82s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 8.249339ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-2gvgb" [3a3c2eae-ad61-46c4-835d-c72533de21b1] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004480674s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rcm7d" [b16c0a9e-56d0-413b-a303-ee6a0ad4a171] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.084941826s
addons_test.go:331: (dbg) Run:  kubectl --context addons-603759 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-603759 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-603759 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.822111656s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-603759 ip
2025/02/17 11:39:51 [DEBUG] GET http://192.168.39.9:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-603759 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.82s)

TestAddons/parallel/Ingress (20.91s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-603759 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-603759 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-603759 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [03b14460-0977-445d-88b8-d92eb6126ba1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [03b14460-0977-445d-88b8-d92eb6126ba1] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003907563s
I0217 11:40:14.676506   84502 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-603759 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-603759 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-603759 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.9
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-603759 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-603759 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-603759 addons disable ingress --alsologtostderr -v=1: (7.731248399s)
--- PASS: TestAddons/parallel/Ingress (20.91s)

TestAddons/parallel/InspektorGadget (10.69s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-4bjl6" [c8f803ae-9918-4bbe-b672-e39e01f56dd7] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003795756s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-603759 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-603759 addons disable inspektor-gadget --alsologtostderr -v=1: (5.687644619s)
--- PASS: TestAddons/parallel/InspektorGadget (10.69s)

TestAddons/parallel/MetricsServer (6.99s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 7.974056ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-gphrl" [4970e471-b48c-4a1d-8605-13b8da7b8eb6] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005845534s
addons_test.go:402: (dbg) Run:  kubectl --context addons-603759 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-603759 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.99s)

TestAddons/parallel/CSI (50.29s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0217 11:39:49.209031   84502 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0217 11:39:49.214621   84502 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0217 11:39:49.214643   84502 kapi.go:107] duration metric: took 5.620269ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 5.630768ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-603759 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-603759 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8ebecb54-c81f-4109-ab8b-8db28e1cc1d0] Pending
helpers_test.go:344: "task-pv-pod" [8ebecb54-c81f-4109-ab8b-8db28e1cc1d0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8ebecb54-c81f-4109-ab8b-8db28e1cc1d0] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003581985s
addons_test.go:511: (dbg) Run:  kubectl --context addons-603759 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-603759 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-603759 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-603759 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-603759 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-603759 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-603759 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f531c10a-9788-4c10-94f0-2a3dda829889] Pending
helpers_test.go:344: "task-pv-pod-restore" [f531c10a-9788-4c10-94f0-2a3dda829889] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f531c10a-9788-4c10-94f0-2a3dda829889] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003934108s
addons_test.go:553: (dbg) Run:  kubectl --context addons-603759 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-603759 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-603759 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-603759 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-603759 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-603759 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.697176506s)
--- PASS: TestAddons/parallel/CSI (50.29s)
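
(The runs of identical helpers_test.go:394 lines above are a poll loop: the helper re-reads the claim's .status.phase until it reports the expected value, normally "Bound", or the 6m0s budget runs out. A minimal Go sketch of the same pattern, shelling out to kubectl exactly as the logged commands do; pollPVC and its signature are illustrative assumptions, not the test suite's real helper:)

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // pollPVC re-runs the kubectl query seen in the log until the claim's
    // .status.phase equals want. Hypothetical helper, not the suite's real API.
    func pollPVC(kubeContext, ns, name, want string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", kubeContext,
                "get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", ns).Output()
            if err == nil && string(out) == want {
                return nil // e.g. phase "Bound"
            }
            time.Sleep(2 * time.Second) // back off between polls
        }
        return fmt.Errorf("pvc %s/%s never reached phase %q within %v", ns, name, want, timeout)
    }

    func main() {
        // Mirrors the waits for "hpvc" and "hpvc-restore" above.
        if err := pollPVC("addons-603759", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }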

TestAddons/parallel/Headlamp (20.68s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-603759 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-wfxl8" [387cb2c4-f758-424e-9660-31c10476c39d] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-wfxl8" [387cb2c4-f758-424e-9660-31c10476c39d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-wfxl8" [387cb2c4-f758-424e-9660-31c10476c39d] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.003706955s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-603759 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-603759 addons disable headlamp --alsologtostderr -v=1: (5.848718336s)
--- PASS: TestAddons/parallel/Headlamp (20.68s)

TestAddons/parallel/CloudSpanner (5.58s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-s6llp" [9429d3d0-8053-43a1-b5b3-74f7a4c0c1de] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003703791s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-603759 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

TestAddons/parallel/LocalPath (56.25s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-603759 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-603759 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603759 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [81cad818-e5df-4eec-af51-d3e1ecfa9437] Pending
helpers_test.go:344: "test-local-path" [81cad818-e5df-4eec-af51-d3e1ecfa9437] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [81cad818-e5df-4eec-af51-d3e1ecfa9437] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [81cad818-e5df-4eec-af51-d3e1ecfa9437] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003622339s
addons_test.go:906: (dbg) Run:  kubectl --context addons-603759 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-603759 ssh "cat /opt/local-path-provisioner/pvc-30b2c27f-f58d-45fe-a8f1-a9f5d48a61ba_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-603759 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-603759 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-603759 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-603759 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.4125084s)
--- PASS: TestAddons/parallel/LocalPath (56.25s)

TestAddons/parallel/NvidiaDevicePlugin (5.49s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-cbc8b" [4cc4fa4f-177a-468d-9fd8-10131cd25c15] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003822202s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-603759 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.49s)

TestAddons/parallel/Yakd (11.98s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-q4nxm" [a4f8fd0e-5433-485d-aec4-7cf74c5b21c3] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002967062s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-603759 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-603759 addons disable yakd --alsologtostderr -v=1: (5.978017157s)
--- PASS: TestAddons/parallel/Yakd (11.98s)

TestAddons/StoppedEnableDisable (13.58s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-603759
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-603759: (13.30014608s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-603759
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-603759
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-603759
--- PASS: TestAddons/StoppedEnableDisable (13.58s)

TestCertOptions (65.27s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-213091 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-213091 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m3.553173199s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-213091 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-213091 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-213091 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-213091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-213091
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-213091: (1.155709266s)
--- PASS: TestCertOptions (65.27s)

TestCertExpiration (290.46s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-010931 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-010931 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m16.695698609s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-010931 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-010931 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (32.708756502s)
helpers_test.go:175: Cleaning up "cert-expiration-010931" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-010931
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-010931: (1.058745241s)
--- PASS: TestCertExpiration (290.46s)

TestDockerFlags (76.92s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-863738 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
E0217 12:27:55.470675   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/skaffold-154700/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-863738 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m15.38225361s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-863738 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-863738 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-863738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-863738
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-863738: (1.075310304s)
--- PASS: TestDockerFlags (76.92s)

TestForceSystemdFlag (65.64s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-664227 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-664227 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m4.165037774s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-664227 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-664227" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-664227
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-664227: (1.152788838s)
--- PASS: TestForceSystemdFlag (65.64s)

TestForceSystemdEnv (100.03s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-953378 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-953378 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m38.607981739s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-953378 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-953378" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-953378
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-953378: (1.091893797s)
--- PASS: TestForceSystemdEnv (100.03s)

TestKVMDriverInstallOrUpdate (4.21s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0217 12:24:21.258527   84502 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0217 12:24:21.258722   84502 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0217 12:24:21.290106   84502 install.go:62] docker-machine-driver-kvm2: exit status 1
W0217 12:24:21.290491   84502 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0217 12:24:21.290573   84502 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1886896392/001/docker-machine-driver-kvm2
I0217 12:24:21.528117   84502 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1886896392/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5494840 0x5494840 0x5494840 0x5494840 0x5494840 0x5494840 0x5494840] Decompressors:map[bz2:0xc00088fb78 gz:0xc00088fc00 tar:0xc00088fbb0 tar.bz2:0xc00088fbc0 tar.gz:0xc00088fbd0 tar.xz:0xc00088fbe0 tar.zst:0xc00088fbf0 tbz2:0xc00088fbc0 tgz:0xc00088fbd0 txz:0xc00088fbe0 tzst:0xc00088fbf0 xz:0xc00088fc08 zip:0xc00088fc10 zst:0xc00088fc20] Getters:map[file:0xc001616ba0 http:0xc00089c410 https:0xc00089c460] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0217 12:24:21.528171   84502 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1886896392/001/docker-machine-driver-kvm2
I0217 12:24:23.717997   84502 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0217 12:24:23.718096   84502 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0217 12:24:23.757160   84502 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0217 12:24:23.757202   84502 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0217 12:24:23.757274   84502 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0217 12:24:23.757325   84502 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1886896392/002/docker-machine-driver-kvm2
I0217 12:24:23.814573   84502 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1886896392/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5494840 0x5494840 0x5494840 0x5494840 0x5494840 0x5494840 0x5494840] Decompressors:map[bz2:0xc00088fb78 gz:0xc00088fc00 tar:0xc00088fbb0 tar.bz2:0xc00088fbc0 tar.gz:0xc00088fbd0 tar.xz:0xc00088fbe0 tar.zst:0xc00088fbf0 tbz2:0xc00088fbc0 tgz:0xc00088fbd0 txz:0xc00088fbe0 tzst:0xc00088fbf0 xz:0xc00088fc08 zip:0xc00088fc10 zst:0xc00088fc20] Getters:map[file:0xc000715920 http:0xc0005b8d70 https:0xc0005b8dc0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0217 12:24:23.814626   84502 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1886896392/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.21s)
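
(The two 404-and-retry sequences in this test trace minikube's driver download order: it first fetches the arch-suffixed release asset, docker-machine-driver-kvm2-amd64, validated against its .sha256 checksum file, and when that checksum fetch returns 404 it retries the common, unsuffixed asset. A minimal Go sketch of that fallback shape; fetchDriver is a hypothetical stand-in that only checks that the .sha256 file exists and skips real checksum verification:)

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // fetchDriver downloads url to dst, but only if the companion ".sha256"
    // file is present. Hypothetical stand-in for minikube's download helper;
    // it models the 404 behaviour in the log, not the real verification.
    func fetchDriver(url, dst string) error {
        sum, err := http.Get(url + ".sha256")
        if err != nil {
            return err
        }
        sum.Body.Close()
        if sum.StatusCode != http.StatusOK {
            return fmt.Errorf("bad response code: %d", sum.StatusCode)
        }
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        f, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = io.Copy(f, resp.Body)
        return err
    }

    func main() {
        base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0"
        dst := "/tmp/docker-machine-driver-kvm2"
        // Arch-specific asset first; its .sha256 404s (as in the log), so
        // fall back to the common, unsuffixed asset.
        if err := fetchDriver(base+"/docker-machine-driver-kvm2-amd64", dst); err != nil {
            fmt.Fprintln(os.Stderr, err, "- trying the common version")
            if err := fetchDriver(base+"/docker-machine-driver-kvm2", dst); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
        }
    }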

TestErrorSpam/setup (50.24s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-158798 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-158798 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-158798 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-158798 --driver=kvm2 : (50.23815432s)
--- PASS: TestErrorSpam/setup (50.24s)

TestErrorSpam/start (0.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158798 --log_dir /tmp/nospam-158798 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158798 --log_dir /tmp/nospam-158798 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158798 --log_dir /tmp/nospam-158798 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.74s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158798 --log_dir /tmp/nospam-158798 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158798 --log_dir /tmp/nospam-158798 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158798 --log_dir /tmp/nospam-158798 status
--- PASS: TestErrorSpam/status (0.74s)

TestErrorSpam/pause (1.22s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158798 --log_dir /tmp/nospam-158798 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158798 --log_dir /tmp/nospam-158798 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158798 --log_dir /tmp/nospam-158798 pause
--- PASS: TestErrorSpam/pause (1.22s)

TestErrorSpam/unpause (1.42s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158798 --log_dir /tmp/nospam-158798 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158798 --log_dir /tmp/nospam-158798 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158798 --log_dir /tmp/nospam-158798 unpause
--- PASS: TestErrorSpam/unpause (1.42s)

                                                
                                    
TestErrorSpam/stop (15.55s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158798 --log_dir /tmp/nospam-158798 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-158798 --log_dir /tmp/nospam-158798 stop: (12.534600377s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158798 --log_dir /tmp/nospam-158798 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-158798 --log_dir /tmp/nospam-158798 stop: (1.881783623s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158798 --log_dir /tmp/nospam-158798 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-158798 --log_dir /tmp/nospam-158798 stop: (1.135980208s)
--- PASS: TestErrorSpam/stop (15.55s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20427-77349/.minikube/files/etc/test/nested/copy/84502/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (86.79s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-576160 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-576160 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m26.790075376s)
--- PASS: TestFunctional/serial/StartWithProxy (86.79s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (40.34s)

=== RUN   TestFunctional/serial/SoftStart
I0217 11:43:30.600641   84502 config.go:182] Loaded profile config "functional-576160": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-576160 --alsologtostderr -v=8
E0217 11:43:34.519801   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:43:34.526324   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:43:34.537829   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:43:34.559292   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:43:34.600846   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:43:34.682408   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:43:34.844051   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:43:35.165881   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:43:35.807943   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:43:37.089360   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:43:39.652264   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:43:44.774645   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:43:55.016150   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-576160 --alsologtostderr -v=8: (40.338243377s)
functional_test.go:680: soft start took 40.339011706s for "functional-576160" cluster.
I0217 11:44:10.939249   84502 config.go:182] Loaded profile config "functional-576160": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (40.34s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-576160 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.15s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-576160 /tmp/TestFunctionalserialCacheCmdcacheadd_local2255606490/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 cache add minikube-local-cache-test:functional-576160
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 cache delete minikube-local-cache-test:functional-576160
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-576160
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-576160 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (213.833702ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 cache reload
E0217 11:44:15.498243   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.13s)
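The reload sequence above is a simple round trip: remove the image inside the VM, confirm crictl no longer sees it, run cache reload, then confirm the image is back. A minimal sketch driving the same commands through os/exec, with the binary and profile names taken from the log; error handling is reduced to the essentials:

	package main

	import (
		"log"
		"os/exec"
	)

	// mk runs the minikube binary against the functional-576160 profile,
	// as the test above does.
	func mk(args ...string) error {
		return exec.Command("out/minikube-linux-amd64",
			append([]string{"-p", "functional-576160"}, args...)...).Run()
	}

	func main() {
		const img = "registry.k8s.io/pause:latest"
		mk("ssh", "sudo docker rmi "+img) // drop the image inside the VM
		if mk("ssh", "sudo crictl inspecti "+img) == nil {
			log.Fatal("image should be gone after rmi")
		}
		if err := mk("cache", "reload"); err != nil { // push cached images back in
			log.Fatal(err)
		}
		if err := mk("ssh", "sudo crictl inspecti "+img); err != nil {
			log.Fatal("image missing after cache reload: ", err)
		}
	}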

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 kubectl -- --context functional-576160 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-576160 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (41.49s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-576160 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0217 11:44:56.460159   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-576160 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.49201498s)
functional_test.go:778: restart took 41.492136276s for "functional-576160" cluster.
I0217 11:44:57.742853   84502 config.go:182] Loaded profile config "functional-576160": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (41.49s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-576160 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.94s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 logs
--- PASS: TestFunctional/serial/LogsCmd (0.94s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.03s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 logs --file /tmp/TestFunctionalserialLogsFileCmd399563036/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-576160 logs --file /tmp/TestFunctionalserialLogsFileCmd399563036/001/logs.txt: (1.024779257s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.03s)

                                                
                                    
TestFunctional/serial/InvalidService (4.23s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-576160 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-576160
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-576160: exit status 115 (271.187146ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.213:30650 |
	|-----------|-------------|-------------|-----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-576160 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.23s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-576160 config get cpus: exit status 14 (66.423588ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-576160 config get cpus: exit status 14 (67.615683ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (21.16s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-576160 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-576160 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 91620: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (21.16s)

                                                
                                    
TestFunctional/parallel/DryRun (0.31s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-576160 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-576160 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (151.416973ms)

-- stdout --
	* [functional-576160] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20427-77349/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-77349/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0217 11:45:05.699640   91222 out.go:345] Setting OutFile to fd 1 ...
	I0217 11:45:05.699760   91222 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:45:05.699770   91222 out.go:358] Setting ErrFile to fd 2...
	I0217 11:45:05.699774   91222 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:45:05.700057   91222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-77349/.minikube/bin
	I0217 11:45:05.700999   91222 out.go:352] Setting JSON to false
	I0217 11:45:05.702065   91222 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5254,"bootTime":1739787452,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0217 11:45:05.702170   91222 start.go:139] virtualization: kvm guest
	I0217 11:45:05.704238   91222 out.go:177] * [functional-576160] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0217 11:45:05.705578   91222 notify.go:220] Checking for updates...
	I0217 11:45:05.705605   91222 out.go:177]   - MINIKUBE_LOCATION=20427
	I0217 11:45:05.706948   91222 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0217 11:45:05.708293   91222 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:45:05.709663   91222 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-77349/.minikube
	I0217 11:45:05.710923   91222 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0217 11:45:05.712166   91222 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0217 11:45:05.713969   91222 config.go:182] Loaded profile config "functional-576160": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:45:05.714407   91222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:45:05.714481   91222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:45:05.730351   91222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46011
	I0217 11:45:05.730824   91222 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:45:05.731447   91222 main.go:141] libmachine: Using API Version  1
	I0217 11:45:05.731476   91222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:45:05.731803   91222 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:45:05.732019   91222 main.go:141] libmachine: (functional-576160) Calling .DriverName
	I0217 11:45:05.732279   91222 driver.go:394] Setting default libvirt URI to qemu:///system
	I0217 11:45:05.732618   91222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:45:05.732674   91222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:45:05.749736   91222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44509
	I0217 11:45:05.750281   91222 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:45:05.750965   91222 main.go:141] libmachine: Using API Version  1
	I0217 11:45:05.751003   91222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:45:05.751326   91222 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:45:05.751520   91222 main.go:141] libmachine: (functional-576160) Calling .DriverName
	I0217 11:45:05.787333   91222 out.go:177] * Using the kvm2 driver based on existing profile
	I0217 11:45:05.788682   91222 start.go:297] selected driver: kvm2
	I0217 11:45:05.788700   91222 start.go:901] validating driver "kvm2" against &{Name:functional-576160 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-576160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 11:45:05.788837   91222 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0217 11:45:05.791204   91222 out.go:201] 
	W0217 11:45:05.792544   91222 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0217 11:45:05.793784   91222 out.go:201] 

** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-576160 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.31s)
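The dry run above fails pre-flight validation: 250MiB requested against the 1800MB usable minimum, exit status 23. A minimal sketch of that kind of check; minUsableMB and validateRequestedMemory are illustrative names, not minikube's actual implementation:

	package main

	import (
		"fmt"
		"os"
	)

	// minUsableMB mirrors the 1800MB floor quoted in the error above.
	const minUsableMB = 1800

	func validateRequestedMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		if err := validateRequestedMemory(250); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to", err)
			os.Exit(23) // the exit status seen in the dry run above
		}
	}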

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-576160 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-576160 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (171.926072ms)

-- stdout --
	* [functional-576160] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20427-77349/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-77349/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0217 11:45:05.542052   91154 out.go:345] Setting OutFile to fd 1 ...
	I0217 11:45:05.542309   91154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:45:05.542346   91154 out.go:358] Setting ErrFile to fd 2...
	I0217 11:45:05.542365   91154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:45:05.543315   91154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-77349/.minikube/bin
	I0217 11:45:05.544155   91154 out.go:352] Setting JSON to false
	I0217 11:45:05.545706   91154 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5253,"bootTime":1739787452,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0217 11:45:05.545854   91154 start.go:139] virtualization: kvm guest
	I0217 11:45:05.548312   91154 out.go:177] * [functional-576160] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0217 11:45:05.549762   91154 out.go:177]   - MINIKUBE_LOCATION=20427
	I0217 11:45:05.549761   91154 notify.go:220] Checking for updates...
	I0217 11:45:05.552410   91154 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0217 11:45:05.553650   91154 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20427-77349/kubeconfig
	I0217 11:45:05.555491   91154 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-77349/.minikube
	I0217 11:45:05.556939   91154 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0217 11:45:05.560059   91154 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0217 11:45:05.562206   91154 config.go:182] Loaded profile config "functional-576160": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:45:05.562843   91154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:45:05.562914   91154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:45:05.581504   91154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41877
	I0217 11:45:05.581873   91154 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:45:05.582562   91154 main.go:141] libmachine: Using API Version  1
	I0217 11:45:05.582595   91154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:45:05.582988   91154 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:45:05.583148   91154 main.go:141] libmachine: (functional-576160) Calling .DriverName
	I0217 11:45:05.583405   91154 driver.go:394] Setting default libvirt URI to qemu:///system
	I0217 11:45:05.583821   91154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:45:05.583871   91154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:45:05.600889   91154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46435
	I0217 11:45:05.601389   91154 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:45:05.601916   91154 main.go:141] libmachine: Using API Version  1
	I0217 11:45:05.601935   91154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:45:05.602278   91154 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:45:05.602513   91154 main.go:141] libmachine: (functional-576160) Calling .DriverName
	I0217 11:45:05.636815   91154 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0217 11:45:05.638154   91154 start.go:297] selected driver: kvm2
	I0217 11:45:05.638175   91154 start.go:901] validating driver "kvm2" against &{Name:functional-576160 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-576160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 11:45:05.638314   91154 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0217 11:45:05.640971   91154 out.go:201] 
	W0217 11:45:05.642305   91154 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0217 11:45:05.643534   91154 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.09s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-576160 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-576160 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-9p2jw" [b6e83d60-ef1c-4f37-8239-2f1c1c689c99] Pending
2025/02/17 11:45:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "hello-node-connect-58f9cf68d8-9p2jw" [b6e83d60-ef1c-4f37-8239-2f1c1c689c99] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.053977989s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.213:31678
functional_test.go:1692: http://192.168.39.213:31678: success! body:

Hostname: hello-node-connect-58f9cf68d8-9p2jw

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.213:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.213:31678
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.60s)
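The connect test resolves the NodePort URL via `minikube service hello-node-connect --url` and fetches it once the pod is Running. A minimal sketch of that endpoint probe; the URL is the one printed above, and the retry bounds are illustrative:

	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		// URL as printed by "minikube service hello-node-connect --url" above.
		url := "http://192.168.39.213:31678"
		client := &http.Client{Timeout: 5 * time.Second}
		for i := 0; i < 10; i++ {
			resp, err := client.Get(url)
			if err == nil && resp.StatusCode == http.StatusOK {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("success! body:\n%s\n", body)
				return
			}
			if err == nil {
				resp.Body.Close()
			}
			time.Sleep(2 * time.Second) // pod may still be starting
		}
		log.Fatal("endpoint never became reachable")
	}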

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (45.34s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9839f121-0c35-4ce9-81f8-faf1d8fa8c21] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003659953s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-576160 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-576160 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-576160 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-576160 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a37acdc9-b983-4a7d-a37b-dbc9d73715ce] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a37acdc9-b983-4a7d-a37b-dbc9d73715ce] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.003254877s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-576160 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-576160 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-576160 delete -f testdata/storage-provisioner/pod.yaml: (1.52975431s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-576160 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ecdc7fb6-a0d5-4dfa-9446-e0ae4b4f0b01] Pending
helpers_test.go:344: "sp-pod" [ecdc7fb6-a0d5-4dfa-9446-e0ae4b4f0b01] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ecdc7fb6-a0d5-4dfa-9446-e0ae4b4f0b01] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00508271s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-576160 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.34s)
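The persistence check above is: write a file through the claim, delete the pod, recreate it against the same PVC, and read the file back. A minimal sketch of that sequence shelling out to kubectl, with the paths and file names from the log; the readiness wait between apply and the final exec is elided:

	package main

	import (
		"log"
		"os/exec"
	)

	// run shells out to kubectl against the functional-576160 context,
	// mirroring the commands in the test log above.
	func run(args ...string) {
		cmd := exec.Command("kubectl", append([]string{"--context", "functional-576160"}, args...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("%v: %s", err, out)
		}
	}

	func main() {
		run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write through the PVC
		run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		run("apply", "-f", "testdata/storage-provisioner/pod.yaml") // new pod, same claim
		// (the real test waits for the new pod to become Ready here)
		run("exec", "sp-pod", "--", "ls", "/tmp/mount") // foo should still be there
	}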

                                                
                                    
TestFunctional/parallel/SSHCmd (0.41s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.45s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh -n functional-576160 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 cp functional-576160:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3020034303/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh -n functional-576160 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh -n functional-576160 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.45s)

                                                
                                    
TestFunctional/parallel/MySQL (40.56s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-576160 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-rlfnw" [334253b1-da6b-42b0-8a77-41cd1e8dd08e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-rlfnw" [334253b1-da6b-42b0-8a77-41cd1e8dd08e] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 37.004612129s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-576160 exec mysql-58ccfd96bb-rlfnw -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-576160 exec mysql-58ccfd96bb-rlfnw -- mysql -ppassword -e "show databases;": exit status 1 (145.081788ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0217 11:45:55.377119   84502 retry.go:31] will retry after 510.712452ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-576160 exec mysql-58ccfd96bb-rlfnw -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-576160 exec mysql-58ccfd96bb-rlfnw -- mysql -ppassword -e "show databases;": exit status 1 (150.025039ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0217 11:45:56.039072   84502 retry.go:31] will retry after 985.642863ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-576160 exec mysql-58ccfd96bb-rlfnw -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-576160 exec mysql-58ccfd96bb-rlfnw -- mysql -ppassword -e "show databases;": exit status 1 (125.748328ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0217 11:45:57.151037   84502 retry.go:31] will retry after 1.301675568s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-576160 exec mysql-58ccfd96bb-rlfnw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (40.56s)
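The retry.go lines above show the probe loop used while mysqld initializes: re-run the query and sleep a growing, randomized interval between attempts. A minimal sketch of that backoff loop; probe mirrors the kubectl exec command from the log, and maxAttempts is illustrative:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// probe runs the same check the test retries above: a "show databases"
	// query through kubectl exec.
	func probe() error {
		return exec.Command("kubectl", "--context", "functional-576160",
			"exec", "mysql-58ccfd96bb-rlfnw", "--",
			"mysql", "-ppassword", "-e", "show databases;").Run()
	}

	func main() {
		delay := 500 * time.Millisecond
		const maxAttempts = 5
		for i := 0; i < maxAttempts; i++ {
			if err := probe(); err == nil {
				fmt.Println("mysql is ready")
				return
			}
			// Randomized, growing delay, as in the "will retry after ..."
			// log lines above.
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
			delay *= 2
		}
		fmt.Println("mysql never became ready")
	}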

                                                
                                    
TestFunctional/parallel/FileSync (0.2s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/84502/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh "sudo cat /etc/test/nested/copy/84502/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
TestFunctional/parallel/CertSync (1.45s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/84502.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh "sudo cat /etc/ssl/certs/84502.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/84502.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh "sudo cat /usr/share/ca-certificates/84502.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/845022.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh "sudo cat /etc/ssl/certs/845022.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/845022.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh "sudo cat /usr/share/ca-certificates/845022.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.45s)
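
The 51391683.0 and 3ec20f2e.0 names are OpenSSL subject-hash aliases, so the same certificate is reachable under /etc/ssl/certs both by file name and by hash. A sketch of deriving such a hash, assuming a local copy of the PEM used by the test:

	# print the subject hash that /etc/ssl/certs uses as a file name
	openssl x509 -noout -subject_hash -in 84502.pem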

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-576160 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-576160 ssh "sudo systemctl is-active crio": exit status 1 (217.594734ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
-- /stderr --
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)
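
The non-zero exit is the expected result here: systemctl is-active exits 0 only for an active unit (for crio it prints "inactive" and exits 3, which the ssh session reports). A sketch of the same assertion:

	# succeeds only when crio is NOT running (this cluster uses dockerd)
	if ! out/minikube-linux-amd64 -p functional-576160 ssh "sudo systemctl is-active crio"; then
	  echo "crio inactive, as expected"
	fi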

                                                
                                    
TestFunctional/parallel/License (0.2s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.21s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-576160 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-576160 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-xxnll" [f50ec8aa-a0ef-4a8d-af6b-b3b40e5f2156] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-xxnll" [f50ec8aa-a0ef-4a8d-af6b-b3b40e5f2156] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.004509532s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.21s)
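
This subtest is the plain deploy-and-expose pattern that the later ServiceCmd subtests query. The manual equivalent, using the same image and port as above:

	kubectl --context functional-576160 create deployment hello-node \
	    --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-576160 expose deployment hello-node \
	    --type=NodePort --port=8080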

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.64s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-576160 /tmp/TestFunctionalparallelMountCmdany-port4188957442/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1739792704416168924" to /tmp/TestFunctionalparallelMountCmdany-port4188957442/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1739792704416168924" to /tmp/TestFunctionalparallelMountCmdany-port4188957442/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1739792704416168924" to /tmp/TestFunctionalparallelMountCmdany-port4188957442/001/test-1739792704416168924
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-576160 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (270.650151ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0217 11:45:04.687137   84502 retry.go:31] will retry after 480.941223ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 17 11:45 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 17 11:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 17 11:45 test-1739792704416168924
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh cat /mount-9p/test-1739792704416168924
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-576160 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0cd991e9-8bb7-41b4-94a1-77db7315b608] Pending
helpers_test.go:344: "busybox-mount" [0cd991e9-8bb7-41b4-94a1-77db7315b608] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0cd991e9-8bb7-41b4-94a1-77db7315b608] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0cd991e9-8bb7-41b4-94a1-77db7315b608] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.002760773s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-576160 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-576160 /tmp/TestFunctionalparallelMountCmdany-port4188957442/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.64s)
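
The first findmnt failure is only the mount daemon not being ready yet; one retry later the 9p filesystem is visible. The verify step on its own, assuming a `minikube mount` daemon is already running against /mount-9p:

	# confirm the host directory is attached as a 9p filesystem
	out/minikube-linux-amd64 -p functional-576160 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-576160 ssh -- ls -la /mount-9p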

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "308.725093ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "56.03994ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "390.728721ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "64.65832ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.9s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:516: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-576160 docker-env) && out/minikube-linux-amd64 status -p functional-576160"
functional_test.go:539: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-576160 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.90s)
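
docker-env emits shell exports that point the host docker CLI at the dockerd inside the VM, so `docker images` lists the cluster's images without copying anything. Typical interactive use, assuming bash:

	eval $(out/minikube-linux-amd64 -p functional-576160 docker-env)
	docker images   # now talks to the daemon inside functional-576160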

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.7s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-576160 /tmp/TestFunctionalparallelMountCmdspecific-port395045951/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-576160 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (220.252357ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0217 11:45:14.275374   84502 retry.go:31] will retry after 452.98489ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-576160 /tmp/TestFunctionalparallelMountCmdspecific-port395045951/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-576160 ssh "sudo umount -f /mount-9p": exit status 1 (199.461088ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-576160 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-576160 /tmp/TestFunctionalparallelMountCmdspecific-port395045951/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.70s)
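
The status-32 exit from umount is harmless here: the "not mounted" message shows that stopping the mount daemon had already detached /mount-9p, so the forced unmount had nothing to do. With --port the 9p server binds a fixed host port rather than a random one; a sketch, where the host directory is hypothetical:

	out/minikube-linux-amd64 mount -p functional-576160 /tmp/somedir:/mount-9p --port 46464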

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.59s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-576160 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1333303299/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-576160 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1333303299/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-576160 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1333303299/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-576160 ssh "findmnt -T" /mount1: exit status 1 (231.184758ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0217 11:45:15.991687   84502 retry.go:31] will retry after 706.942488ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-576160 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-576160 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1333303299/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-576160 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1333303299/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-576160 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1333303299/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.59s)
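
`mount ... --kill=true` terminates every mount helper process for the profile in one call, which is why the three per-mount stop attempts afterwards find no surviving parent process. A sketch of the cleanup pattern, with a hypothetical host directory:

	# start mounts in the background, then kill them all at once
	out/minikube-linux-amd64 mount -p functional-576160 /tmp/somedir:/mount1 &
	out/minikube-linux-amd64 mount -p functional-576160 /tmp/somedir:/mount2 &
	out/minikube-linux-amd64 mount -p functional-576160 --kill=true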

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.43s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 service list -o json
functional_test.go:1511: Took "440.930781ms" to run "out/minikube-linux-amd64 -p functional-576160 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.213:30276
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.29s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.29s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.213:30276
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)
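
The HTTPS, Format and URL subtests resolve the same NodePort service (VM IP 192.168.39.213, assigned port 30276) in three shapes: an https URL, a single Go-template field, and a plain http URL. Side by side, exactly as invoked above:

	out/minikube-linux-amd64 -p functional-576160 service --namespace=default --https --url hello-node
	out/minikube-linux-amd64 -p functional-576160 service hello-node --url --format={{.IP}}
	out/minikube-linux-amd64 -p functional-576160 service hello-node --url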

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.51s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-576160 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-576160
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-576160
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-576160 image ls --format short --alsologtostderr:
I0217 11:45:28.003832   93107 out.go:345] Setting OutFile to fd 1 ...
I0217 11:45:28.004114   93107 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 11:45:28.004124   93107 out.go:358] Setting ErrFile to fd 2...
I0217 11:45:28.004128   93107 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 11:45:28.004304   93107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-77349/.minikube/bin
I0217 11:45:28.004946   93107 config.go:182] Loaded profile config "functional-576160": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0217 11:45:28.005053   93107 config.go:182] Loaded profile config "functional-576160": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0217 11:45:28.005458   93107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0217 11:45:28.005513   93107 main.go:141] libmachine: Launching plugin server for driver kvm2
I0217 11:45:28.021041   93107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34199
I0217 11:45:28.021620   93107 main.go:141] libmachine: () Calling .GetVersion
I0217 11:45:28.022275   93107 main.go:141] libmachine: Using API Version  1
I0217 11:45:28.022301   93107 main.go:141] libmachine: () Calling .SetConfigRaw
I0217 11:45:28.022621   93107 main.go:141] libmachine: () Calling .GetMachineName
I0217 11:45:28.022820   93107 main.go:141] libmachine: (functional-576160) Calling .GetState
I0217 11:45:28.024586   93107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0217 11:45:28.024636   93107 main.go:141] libmachine: Launching plugin server for driver kvm2
I0217 11:45:28.039342   93107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46877
I0217 11:45:28.039774   93107 main.go:141] libmachine: () Calling .GetVersion
I0217 11:45:28.040239   93107 main.go:141] libmachine: Using API Version  1
I0217 11:45:28.040263   93107 main.go:141] libmachine: () Calling .SetConfigRaw
I0217 11:45:28.040612   93107 main.go:141] libmachine: () Calling .GetMachineName
I0217 11:45:28.040775   93107 main.go:141] libmachine: (functional-576160) Calling .DriverName
I0217 11:45:28.040993   93107 ssh_runner.go:195] Run: systemctl --version
I0217 11:45:28.041020   93107 main.go:141] libmachine: (functional-576160) Calling .GetSSHHostname
I0217 11:45:28.043807   93107 main.go:141] libmachine: (functional-576160) DBG | domain functional-576160 has defined MAC address 52:54:00:42:10:c7 in network mk-functional-576160
I0217 11:45:28.044239   93107 main.go:141] libmachine: (functional-576160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:10:c7", ip: ""} in network mk-functional-576160: {Iface:virbr1 ExpiryTime:2025-02-17 12:42:18 +0000 UTC Type:0 Mac:52:54:00:42:10:c7 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:functional-576160 Clientid:01:52:54:00:42:10:c7}
I0217 11:45:28.044281   93107 main.go:141] libmachine: (functional-576160) DBG | domain functional-576160 has defined IP address 192.168.39.213 and MAC address 52:54:00:42:10:c7 in network mk-functional-576160
I0217 11:45:28.044393   93107 main.go:141] libmachine: (functional-576160) Calling .GetSSHPort
I0217 11:45:28.044573   93107 main.go:141] libmachine: (functional-576160) Calling .GetSSHKeyPath
I0217 11:45:28.044705   93107 main.go:141] libmachine: (functional-576160) Calling .GetSSHUsername
I0217 11:45:28.044882   93107 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/functional-576160/id_rsa Username:docker}
I0217 11:45:28.123710   93107 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0217 11:45:28.146191   93107 main.go:141] libmachine: Making call to close driver server
I0217 11:45:28.146203   93107 main.go:141] libmachine: (functional-576160) Calling .Close
I0217 11:45:28.146470   93107 main.go:141] libmachine: Successfully made call to close driver server
I0217 11:45:28.146492   93107 main.go:141] libmachine: Making call to close connection to plugin binary
I0217 11:45:28.146495   93107 main.go:141] libmachine: (functional-576160) DBG | Closing plugin on server side
I0217 11:45:28.146503   93107 main.go:141] libmachine: Making call to close driver server
I0217 11:45:28.146514   93107 main.go:141] libmachine: (functional-576160) Calling .Close
I0217 11:45:28.146747   93107 main.go:141] libmachine: Successfully made call to close driver server
I0217 11:45:28.146763   93107 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)
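
The four ImageList subtests render the same image list in different formats: short prints one repo:tag per line, while table, json and yaml (exercised next) add IDs and sizes. The variants:

	out/minikube-linux-amd64 -p functional-576160 image ls --format short
	out/minikube-linux-amd64 -p functional-576160 image ls --format table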

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-576160 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kicbase/echo-server               | functional-576160 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-576160 | 3345a30f046a6 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.32.1           | 2b0d6572d062c | 69.6MB |
| registry.k8s.io/kube-proxy                  | v1.32.1           | e29f9c7391fd9 | 94MB   |
| registry.k8s.io/etcd                        | 3.5.16-0          | a9e7e6b294baf | 150MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| localhost/my-image                          | functional-576160 | b4eb0cc948e2a | 1.24MB |
| registry.k8s.io/kube-apiserver              | v1.32.1           | 95c0bda56fc4d | 97MB   |
| registry.k8s.io/kube-controller-manager     | v1.32.1           | 019ee182b58e2 | 89.7MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-576160 image ls --format table --alsologtostderr:
I0217 11:45:32.113233   93271 out.go:345] Setting OutFile to fd 1 ...
I0217 11:45:32.113510   93271 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 11:45:32.113521   93271 out.go:358] Setting ErrFile to fd 2...
I0217 11:45:32.113525   93271 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 11:45:32.113686   93271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-77349/.minikube/bin
I0217 11:45:32.114248   93271 config.go:182] Loaded profile config "functional-576160": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0217 11:45:32.114344   93271 config.go:182] Loaded profile config "functional-576160": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0217 11:45:32.114685   93271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0217 11:45:32.114741   93271 main.go:141] libmachine: Launching plugin server for driver kvm2
I0217 11:45:32.129764   93271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36157
I0217 11:45:32.130287   93271 main.go:141] libmachine: () Calling .GetVersion
I0217 11:45:32.130930   93271 main.go:141] libmachine: Using API Version  1
I0217 11:45:32.130951   93271 main.go:141] libmachine: () Calling .SetConfigRaw
I0217 11:45:32.131350   93271 main.go:141] libmachine: () Calling .GetMachineName
I0217 11:45:32.131549   93271 main.go:141] libmachine: (functional-576160) Calling .GetState
I0217 11:45:32.133329   93271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0217 11:45:32.133376   93271 main.go:141] libmachine: Launching plugin server for driver kvm2
I0217 11:45:32.147491   93271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46139
I0217 11:45:32.147935   93271 main.go:141] libmachine: () Calling .GetVersion
I0217 11:45:32.148484   93271 main.go:141] libmachine: Using API Version  1
I0217 11:45:32.148511   93271 main.go:141] libmachine: () Calling .SetConfigRaw
I0217 11:45:32.148837   93271 main.go:141] libmachine: () Calling .GetMachineName
I0217 11:45:32.149018   93271 main.go:141] libmachine: (functional-576160) Calling .DriverName
I0217 11:45:32.149252   93271 ssh_runner.go:195] Run: systemctl --version
I0217 11:45:32.149277   93271 main.go:141] libmachine: (functional-576160) Calling .GetSSHHostname
I0217 11:45:32.152091   93271 main.go:141] libmachine: (functional-576160) DBG | domain functional-576160 has defined MAC address 52:54:00:42:10:c7 in network mk-functional-576160
I0217 11:45:32.152501   93271 main.go:141] libmachine: (functional-576160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:10:c7", ip: ""} in network mk-functional-576160: {Iface:virbr1 ExpiryTime:2025-02-17 12:42:18 +0000 UTC Type:0 Mac:52:54:00:42:10:c7 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:functional-576160 Clientid:01:52:54:00:42:10:c7}
I0217 11:45:32.152538   93271 main.go:141] libmachine: (functional-576160) DBG | domain functional-576160 has defined IP address 192.168.39.213 and MAC address 52:54:00:42:10:c7 in network mk-functional-576160
I0217 11:45:32.152658   93271 main.go:141] libmachine: (functional-576160) Calling .GetSSHPort
I0217 11:45:32.152848   93271 main.go:141] libmachine: (functional-576160) Calling .GetSSHKeyPath
I0217 11:45:32.153010   93271 main.go:141] libmachine: (functional-576160) Calling .GetSSHUsername
I0217 11:45:32.153173   93271 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/functional-576160/id_rsa Username:docker}
I0217 11:45:32.234047   93271 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0217 11:45:32.267687   93271 main.go:141] libmachine: Making call to close driver server
I0217 11:45:32.267717   93271 main.go:141] libmachine: (functional-576160) Calling .Close
I0217 11:45:32.268032   93271 main.go:141] libmachine: Successfully made call to close driver server
I0217 11:45:32.268097   93271 main.go:141] libmachine: Making call to close connection to plugin binary
I0217 11:45:32.268111   93271 main.go:141] libmachine: Making call to close driver server
I0217 11:45:32.268118   93271 main.go:141] libmachine: (functional-576160) Calling .Close
I0217 11:45:32.268049   93271 main.go:141] libmachine: (functional-576160) DBG | Closing plugin on server side
I0217 11:45:32.268360   93271 main.go:141] libmachine: Successfully made call to close driver server
I0217 11:45:32.268381   93271 main.go:141] libmachine: (functional-576160) DBG | Closing plugin on server side
I0217 11:45:32.268388   93271 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-576160 image ls --format json --alsologtostderr:
[{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"150000000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-576160"],"size":"4940000"},{"id":"95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"97000000"},{"id":"019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"89700000"},{"id":"e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"94000000"},{"id":"115053965e86b2df4d78af78
d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"3345a30f046a6422d4b98b1a9b4dd87fbcda8a6c41d33b618dc0600e20931175","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-576160"],"size":"30"},{"id":"2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf
1faa1b23d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"69600000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"b4eb0cc948e2a6381c5fdf49cfa1ef5af3834a3dbf74e8b09be696278d58b583","repoDigests":[],"repoTags":["localhost/my-image:functional-576160"],"size":"1240000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-576160 image ls --format json --alsologtostderr:
I0217 11:45:31.907342   93248 out.go:345] Setting OutFile to fd 1 ...
I0217 11:45:31.907453   93248 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 11:45:31.907462   93248 out.go:358] Setting ErrFile to fd 2...
I0217 11:45:31.907466   93248 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 11:45:31.907637   93248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-77349/.minikube/bin
I0217 11:45:31.908201   93248 config.go:182] Loaded profile config "functional-576160": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0217 11:45:31.908300   93248 config.go:182] Loaded profile config "functional-576160": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0217 11:45:31.908652   93248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0217 11:45:31.908707   93248 main.go:141] libmachine: Launching plugin server for driver kvm2
I0217 11:45:31.923378   93248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38111
I0217 11:45:31.923916   93248 main.go:141] libmachine: () Calling .GetVersion
I0217 11:45:31.924630   93248 main.go:141] libmachine: Using API Version  1
I0217 11:45:31.924659   93248 main.go:141] libmachine: () Calling .SetConfigRaw
I0217 11:45:31.925018   93248 main.go:141] libmachine: () Calling .GetMachineName
I0217 11:45:31.925207   93248 main.go:141] libmachine: (functional-576160) Calling .GetState
I0217 11:45:31.927024   93248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0217 11:45:31.927071   93248 main.go:141] libmachine: Launching plugin server for driver kvm2
I0217 11:45:31.942270   93248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34369
I0217 11:45:31.942796   93248 main.go:141] libmachine: () Calling .GetVersion
I0217 11:45:31.943326   93248 main.go:141] libmachine: Using API Version  1
I0217 11:45:31.943373   93248 main.go:141] libmachine: () Calling .SetConfigRaw
I0217 11:45:31.943752   93248 main.go:141] libmachine: () Calling .GetMachineName
I0217 11:45:31.943993   93248 main.go:141] libmachine: (functional-576160) Calling .DriverName
I0217 11:45:31.944264   93248 ssh_runner.go:195] Run: systemctl --version
I0217 11:45:31.944298   93248 main.go:141] libmachine: (functional-576160) Calling .GetSSHHostname
I0217 11:45:31.947204   93248 main.go:141] libmachine: (functional-576160) DBG | domain functional-576160 has defined MAC address 52:54:00:42:10:c7 in network mk-functional-576160
I0217 11:45:31.947663   93248 main.go:141] libmachine: (functional-576160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:10:c7", ip: ""} in network mk-functional-576160: {Iface:virbr1 ExpiryTime:2025-02-17 12:42:18 +0000 UTC Type:0 Mac:52:54:00:42:10:c7 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:functional-576160 Clientid:01:52:54:00:42:10:c7}
I0217 11:45:31.947702   93248 main.go:141] libmachine: (functional-576160) DBG | domain functional-576160 has defined IP address 192.168.39.213 and MAC address 52:54:00:42:10:c7 in network mk-functional-576160
I0217 11:45:31.947800   93248 main.go:141] libmachine: (functional-576160) Calling .GetSSHPort
I0217 11:45:31.947960   93248 main.go:141] libmachine: (functional-576160) Calling .GetSSHKeyPath
I0217 11:45:31.948147   93248 main.go:141] libmachine: (functional-576160) Calling .GetSSHUsername
I0217 11:45:31.948314   93248 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/functional-576160/id_rsa Username:docker}
I0217 11:45:32.027831   93248 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0217 11:45:32.057166   93248 main.go:141] libmachine: Making call to close driver server
I0217 11:45:32.057185   93248 main.go:141] libmachine: (functional-576160) Calling .Close
I0217 11:45:32.057506   93248 main.go:141] libmachine: Successfully made call to close driver server
I0217 11:45:32.057542   93248 main.go:141] libmachine: Making call to close connection to plugin binary
I0217 11:45:32.057552   93248 main.go:141] libmachine: Making call to close driver server
I0217 11:45:32.057561   93248 main.go:141] libmachine: (functional-576160) Calling .Close
I0217 11:45:32.057571   93248 main.go:141] libmachine: (functional-576160) DBG | Closing plugin on server side
I0217 11:45:32.057793   93248 main.go:141] libmachine: Successfully made call to close driver server
I0217 11:45:32.057813   93248 main.go:141] libmachine: Making call to close connection to plugin binary
I0217 11:45:32.057833   93248 main.go:141] libmachine: (functional-576160) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)
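
The json rendering is the machine-readable one: an array of objects with id, repoDigests, repoTags and size. A sketch of extracting just the tags, assuming jq is installed on the host:

	out/minikube-linux-amd64 -p functional-576160 image ls --format json | jq -r '.[].repoTags[]'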

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-576160 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 3345a30f046a6422d4b98b1a9b4dd87fbcda8a6c41d33b618dc0600e20931175
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-576160
size: "30"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "150000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-576160
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "69600000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "97000000"
- id: 019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "89700000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "94000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-576160 image ls --format yaml --alsologtostderr:
I0217 11:45:28.204156   93131 out.go:345] Setting OutFile to fd 1 ...
I0217 11:45:28.204295   93131 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 11:45:28.204311   93131 out.go:358] Setting ErrFile to fd 2...
I0217 11:45:28.204318   93131 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 11:45:28.204602   93131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-77349/.minikube/bin
I0217 11:45:28.205325   93131 config.go:182] Loaded profile config "functional-576160": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0217 11:45:28.205432   93131 config.go:182] Loaded profile config "functional-576160": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0217 11:45:28.205805   93131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0217 11:45:28.205877   93131 main.go:141] libmachine: Launching plugin server for driver kvm2
I0217 11:45:28.221870   93131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40423
I0217 11:45:28.222444   93131 main.go:141] libmachine: () Calling .GetVersion
I0217 11:45:28.223004   93131 main.go:141] libmachine: Using API Version  1
I0217 11:45:28.223038   93131 main.go:141] libmachine: () Calling .SetConfigRaw
I0217 11:45:28.223351   93131 main.go:141] libmachine: () Calling .GetMachineName
I0217 11:45:28.223602   93131 main.go:141] libmachine: (functional-576160) Calling .GetState
I0217 11:45:28.225545   93131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0217 11:45:28.225597   93131 main.go:141] libmachine: Launching plugin server for driver kvm2
I0217 11:45:28.240716   93131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42387
I0217 11:45:28.241174   93131 main.go:141] libmachine: () Calling .GetVersion
I0217 11:45:28.241686   93131 main.go:141] libmachine: Using API Version  1
I0217 11:45:28.241702   93131 main.go:141] libmachine: () Calling .SetConfigRaw
I0217 11:45:28.242029   93131 main.go:141] libmachine: () Calling .GetMachineName
I0217 11:45:28.242287   93131 main.go:141] libmachine: (functional-576160) Calling .DriverName
I0217 11:45:28.242514   93131 ssh_runner.go:195] Run: systemctl --version
I0217 11:45:28.242549   93131 main.go:141] libmachine: (functional-576160) Calling .GetSSHHostname
I0217 11:45:28.245240   93131 main.go:141] libmachine: (functional-576160) DBG | domain functional-576160 has defined MAC address 52:54:00:42:10:c7 in network mk-functional-576160
I0217 11:45:28.245654   93131 main.go:141] libmachine: (functional-576160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:10:c7", ip: ""} in network mk-functional-576160: {Iface:virbr1 ExpiryTime:2025-02-17 12:42:18 +0000 UTC Type:0 Mac:52:54:00:42:10:c7 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:functional-576160 Clientid:01:52:54:00:42:10:c7}
I0217 11:45:28.245697   93131 main.go:141] libmachine: (functional-576160) DBG | domain functional-576160 has defined IP address 192.168.39.213 and MAC address 52:54:00:42:10:c7 in network mk-functional-576160
I0217 11:45:28.245817   93131 main.go:141] libmachine: (functional-576160) Calling .GetSSHPort
I0217 11:45:28.245998   93131 main.go:141] libmachine: (functional-576160) Calling .GetSSHKeyPath
I0217 11:45:28.246151   93131 main.go:141] libmachine: (functional-576160) Calling .GetSSHUsername
I0217 11:45:28.246284   93131 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/functional-576160/id_rsa Username:docker}
I0217 11:45:28.323177   93131 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0217 11:45:28.344026   93131 main.go:141] libmachine: Making call to close driver server
I0217 11:45:28.344039   93131 main.go:141] libmachine: (functional-576160) Calling .Close
I0217 11:45:28.344339   93131 main.go:141] libmachine: Successfully made call to close driver server
I0217 11:45:28.344363   93131 main.go:141] libmachine: Making call to close connection to plugin binary
I0217 11:45:28.344401   93131 main.go:141] libmachine: (functional-576160) DBG | Closing plugin on server side
I0217 11:45:28.344453   93131 main.go:141] libmachine: Making call to close driver server
I0217 11:45:28.344466   93131 main.go:141] libmachine: (functional-576160) Calling .Close
I0217 11:45:28.344732   93131 main.go:141] libmachine: Successfully made call to close driver server
I0217 11:45:28.344747   93131 main.go:141] libmachine: (functional-576160) DBG | Closing plugin on server side
I0217 11:45:28.344755   93131 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-576160 ssh pgrep buildkitd: exit status 1 (216.275954ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 image build -t localhost/my-image:functional-576160 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-576160 image build -t localhost/my-image:functional-576160 testdata/build --alsologtostderr: (3.071786241s)
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-576160 image build -t localhost/my-image:functional-576160 testdata/build --alsologtostderr:
I0217 11:45:28.619435   93184 out.go:345] Setting OutFile to fd 1 ...
I0217 11:45:28.619531   93184 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 11:45:28.619540   93184 out.go:358] Setting ErrFile to fd 2...
I0217 11:45:28.619545   93184 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 11:45:28.619767   93184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-77349/.minikube/bin
I0217 11:45:28.620374   93184 config.go:182] Loaded profile config "functional-576160": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0217 11:45:28.621013   93184 config.go:182] Loaded profile config "functional-576160": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0217 11:45:28.621543   93184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0217 11:45:28.621594   93184 main.go:141] libmachine: Launching plugin server for driver kvm2
I0217 11:45:28.636533   93184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43419
I0217 11:45:28.637067   93184 main.go:141] libmachine: () Calling .GetVersion
I0217 11:45:28.637670   93184 main.go:141] libmachine: Using API Version  1
I0217 11:45:28.637724   93184 main.go:141] libmachine: () Calling .SetConfigRaw
I0217 11:45:28.638069   93184 main.go:141] libmachine: () Calling .GetMachineName
I0217 11:45:28.638272   93184 main.go:141] libmachine: (functional-576160) Calling .GetState
I0217 11:45:28.640243   93184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0217 11:45:28.640289   93184 main.go:141] libmachine: Launching plugin server for driver kvm2
I0217 11:45:28.654936   93184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43511
I0217 11:45:28.655459   93184 main.go:141] libmachine: () Calling .GetVersion
I0217 11:45:28.655997   93184 main.go:141] libmachine: Using API Version  1
I0217 11:45:28.656020   93184 main.go:141] libmachine: () Calling .SetConfigRaw
I0217 11:45:28.656330   93184 main.go:141] libmachine: () Calling .GetMachineName
I0217 11:45:28.656558   93184 main.go:141] libmachine: (functional-576160) Calling .DriverName
I0217 11:45:28.656771   93184 ssh_runner.go:195] Run: systemctl --version
I0217 11:45:28.656804   93184 main.go:141] libmachine: (functional-576160) Calling .GetSSHHostname
I0217 11:45:28.659815   93184 main.go:141] libmachine: (functional-576160) DBG | domain functional-576160 has defined MAC address 52:54:00:42:10:c7 in network mk-functional-576160
I0217 11:45:28.660292   93184 main.go:141] libmachine: (functional-576160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:10:c7", ip: ""} in network mk-functional-576160: {Iface:virbr1 ExpiryTime:2025-02-17 12:42:18 +0000 UTC Type:0 Mac:52:54:00:42:10:c7 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:functional-576160 Clientid:01:52:54:00:42:10:c7}
I0217 11:45:28.660326   93184 main.go:141] libmachine: (functional-576160) DBG | domain functional-576160 has defined IP address 192.168.39.213 and MAC address 52:54:00:42:10:c7 in network mk-functional-576160
I0217 11:45:28.660463   93184 main.go:141] libmachine: (functional-576160) Calling .GetSSHPort
I0217 11:45:28.660755   93184 main.go:141] libmachine: (functional-576160) Calling .GetSSHKeyPath
I0217 11:45:28.661007   93184 main.go:141] libmachine: (functional-576160) Calling .GetSSHUsername
I0217 11:45:28.661191   93184 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/functional-576160/id_rsa Username:docker}
I0217 11:45:28.760246   93184 build_images.go:161] Building image from path: /tmp/build.3684653883.tar
I0217 11:45:28.760335   93184 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0217 11:45:28.774738   93184 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3684653883.tar
I0217 11:45:28.783055   93184 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3684653883.tar: stat -c "%s %y" /var/lib/minikube/build/build.3684653883.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3684653883.tar': No such file or directory
I0217 11:45:28.783095   93184 ssh_runner.go:362] scp /tmp/build.3684653883.tar --> /var/lib/minikube/build/build.3684653883.tar (3072 bytes)
I0217 11:45:28.816481   93184 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3684653883
I0217 11:45:28.828809   93184 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3684653883 -xf /var/lib/minikube/build/build.3684653883.tar
I0217 11:45:28.839309   93184 docker.go:360] Building image: /var/lib/minikube/build/build.3684653883
I0217 11:45:28.839393   93184 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-576160 /var/lib/minikube/build/build.3684653883
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:b4eb0cc948e2a6381c5fdf49cfa1ef5af3834a3dbf74e8b09be696278d58b583
#8 writing image sha256:b4eb0cc948e2a6381c5fdf49cfa1ef5af3834a3dbf74e8b09be696278d58b583 done
#8 naming to localhost/my-image:functional-576160 done
#8 DONE 0.1s
I0217 11:45:31.611012   93184 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-576160 /var/lib/minikube/build/build.3684653883: (2.77158534s)
I0217 11:45:31.611133   93184 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3684653883
I0217 11:45:31.621079   93184 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3684653883.tar
I0217 11:45:31.631281   93184 build_images.go:217] Built localhost/my-image:functional-576160 from /tmp/build.3684653883.tar
I0217 11:45:31.631317   93184 build_images.go:133] succeeded building to: functional-576160
I0217 11:45:31.631322   93184 build_images.go:134] failed building to: 
I0217 11:45:31.631351   93184 main.go:141] libmachine: Making call to close driver server
I0217 11:45:31.631386   93184 main.go:141] libmachine: (functional-576160) Calling .Close
I0217 11:45:31.631707   93184 main.go:141] libmachine: Successfully made call to close driver server
I0217 11:45:31.631727   93184 main.go:141] libmachine: Making call to close connection to plugin binary
I0217 11:45:31.631736   93184 main.go:141] libmachine: Making call to close driver server
I0217 11:45:31.631744   93184 main.go:141] libmachine: (functional-576160) Calling .Close
I0217 11:45:31.631743   93184 main.go:141] libmachine: (functional-576160) DBG | Closing plugin on server side
I0217 11:45:31.631995   93184 main.go:141] libmachine: Successfully made call to close driver server
I0217 11:45:31.632013   93184 main.go:141] libmachine: Making call to close connection to plugin binary
I0217 11:45:31.632031   93184 main.go:141] libmachine: (functional-576160) DBG | Closing plugin on server side
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.51s)
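
For reference, testdata/build itself is not printed verbatim in this log, but build steps #1-#8 above imply a Dockerfile equivalent to the minimal sketch below; the content.txt payload is a hypothetical stand-in, and the last line replays the exact image build invocation the test runs.
    # minimal sketch, assuming testdata/build matches the build steps logged above
    printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
    echo placeholder > content.txt   # hypothetical payload; the real file contents are not in the log
    out/minikube-linux-amd64 -p functional-576160 image build -t localhost/my-image:functional-576160 . --alsologtostderr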

TestFunctional/parallel/ImageCommands/Setup (1.61s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.585252763s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-576160
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.61s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 image load --daemon kicbase/echo-server:functional-576160 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.11s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 image load --daemon kicbase/echo-server:functional-576160 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-576160
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 image load --daemon kicbase/echo-server:functional-576160 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.47s)
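
The Setup, ImageLoadDaemon, ImageReloadDaemon, and ImageTagAndLoadDaemon cases above all exercise the same host-daemon-to-cluster path; condensed into one sequence, with every command copied from the log:
    docker pull kicbase/echo-server:1.0
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-576160
    out/minikube-linux-amd64 -p functional-576160 image load --daemon kicbase/echo-server:functional-576160 --alsologtostderr
    out/minikube-linux-amd64 -p functional-576160 image ls   # the functional-576160 tag should now be listed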

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 image save kicbase/echo-server:functional-576160 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 image rm kicbase/echo-server:functional-576160 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.02s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-576160
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-576160 image save --daemon kicbase/echo-server:functional-576160 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-576160
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)
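
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon form a full round trip; a condensed sketch using the commands from the log, with the tar path shortened to the working directory:
    # cluster runtime -> tarball -> cluster runtime
    out/minikube-linux-amd64 -p functional-576160 image save kicbase/echo-server:functional-576160 ./echo-server-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-576160 image rm kicbase/echo-server:functional-576160 --alsologtostderr
    out/minikube-linux-amd64 -p functional-576160 image load ./echo-server-save.tar --alsologtostderr
    # reverse direction: export the in-cluster image back into the host docker daemon
    docker rmi kicbase/echo-server:functional-576160
    out/minikube-linux-amd64 -p functional-576160 image save --daemon kicbase/echo-server:functional-576160 --alsologtostderr
    docker image inspect kicbase/echo-server:functional-576160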

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-576160
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-576160
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-576160
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (214.4s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-061450 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-061450 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m6.102667702s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-061450 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-061450 cache add gcr.io/k8s-minikube/gvisor-addon:2: (23.201309457s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-061450 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-061450 addons enable gvisor: (4.197062297s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [d25ce11d-9579-46bf-8b84-1a52741a1f53] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.003566896s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-061450 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [83de650b-6df0-40c9-9ca0-b32e3781f090] Pending
helpers_test.go:344: "nginx-gvisor" [83de650b-6df0-40c9-9ca0-b32e3781f090] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [83de650b-6df0-40c9-9ca0-b32e3781f090] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 28.007212605s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-061450
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-061450: (7.36238115s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-061450 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-061450 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m7.24603008s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [d25ce11d-9579-46bf-8b84-1a52741a1f53] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004382253s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [83de650b-6df0-40c9-9ca0-b32e3781f090] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.003315753s
helpers_test.go:175: Cleaning up "gvisor-061450" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-061450
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-061450: (1.122561054s)
--- PASS: TestGvisorAddon (214.40s)
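
The gVisor scenario above can be replayed by hand; a sketch assembled from the commands in the log, omitting only the wait-for-pod polling the test harness does between steps:
    out/minikube-linux-amd64 start -p gvisor-061450 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2
    out/minikube-linux-amd64 -p gvisor-061450 cache add gcr.io/k8s-minikube/gvisor-addon:2
    out/minikube-linux-amd64 -p gvisor-061450 addons enable gvisor
    kubectl --context gvisor-061450 replace --force -f testdata/nginx-gvisor.yaml
    # stop and start again; both the gvisor pod and nginx-gvisor should return to Running
    out/minikube-linux-amd64 stop -p gvisor-061450
    out/minikube-linux-amd64 start -p gvisor-061450 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2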

TestMultiControlPlane/serial/StartCluster (219.89s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-783738 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0217 11:46:18.383930   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:48:34.520400   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:49:02.229748   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-783738 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m39.216426057s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (219.89s)
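
The --ha flag is what makes ha-783738 a multi-control-plane profile (the status output under StopSecondaryNode below shows three Control Plane nodes plus one Worker); the setup reduces to the two commands the test runs:
    out/minikube-linux-amd64 start -p ha-783738 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2
    out/minikube-linux-amd64 -p ha-783738 status -v=7 --alsologtostderr   # readiness check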

TestMultiControlPlane/serial/DeployApp (5.49s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-783738 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-783738 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-783738 -- rollout status deployment/busybox: (3.273108595s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-783738 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-783738 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-783738 -- exec busybox-58667487b6-2q9md -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-783738 -- exec busybox-58667487b6-mp8w2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-783738 -- exec busybox-58667487b6-pcd6c -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-783738 -- exec busybox-58667487b6-2q9md -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-783738 -- exec busybox-58667487b6-mp8w2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-783738 -- exec busybox-58667487b6-pcd6c -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-783738 -- exec busybox-58667487b6-2q9md -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-783738 -- exec busybox-58667487b6-mp8w2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-783738 -- exec busybox-58667487b6-pcd6c -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.49s)
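
DeployApp is a per-replica DNS probe; using plain kubectl with the profile's context in place of the out/minikube-linux-amd64 kubectl wrapper the test uses, the checks condense to:
    kubectl --context ha-783738 apply -f ./testdata/ha/ha-pod-dns-test.yaml
    kubectl --context ha-783738 rollout status deployment/busybox
    for pod in $(kubectl --context ha-783738 get pods -o jsonpath='{.items[*].metadata.name}'); do
      kubectl --context ha-783738 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done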

TestMultiControlPlane/serial/PingHostFromPods (1.26s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-783738 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-783738 -- exec busybox-58667487b6-2q9md -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-783738 -- exec busybox-58667487b6-2q9md -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-783738 -- exec busybox-58667487b6-mp8w2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-783738 -- exec busybox-58667487b6-mp8w2 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-783738 -- exec busybox-58667487b6-pcd6c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-783738 -- exec busybox-58667487b6-pcd6c -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.26s)
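
The awk 'NR==5' | cut -d' ' -f3 pipeline slices the resolved address for host.minikube.internal out of the fifth line of busybox's nslookup output (192.168.39.1 here, the host side of the KVM network); per pod the probe is:
    HOST_IP=$(kubectl --context ha-783738 exec busybox-58667487b6-2q9md -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context ha-783738 exec busybox-58667487b6-2q9md -- ping -c 1 "$HOST_IP"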

TestMultiControlPlane/serial/AddWorkerNode (62.14s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-783738 -v=7 --alsologtostderr
E0217 11:50:04.214565   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:50:04.220984   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:50:04.232360   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:50:04.253803   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:50:04.295255   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:50:04.376726   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:50:04.538496   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:50:04.860279   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:50:05.502596   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:50:06.784119   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:50:09.345998   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:50:14.467934   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:50:24.710046   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:50:45.191370   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-783738 -v=7 --alsologtostderr: (1m1.302768297s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (62.14s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-783738 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

TestMultiControlPlane/serial/CopyFile (12.98s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 cp testdata/cp-test.txt ha-783738:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 cp ha-783738:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3703533036/001/cp-test_ha-783738.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 cp ha-783738:/home/docker/cp-test.txt ha-783738-m02:/home/docker/cp-test_ha-783738_ha-783738-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m02 "sudo cat /home/docker/cp-test_ha-783738_ha-783738-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 cp ha-783738:/home/docker/cp-test.txt ha-783738-m03:/home/docker/cp-test_ha-783738_ha-783738-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m03 "sudo cat /home/docker/cp-test_ha-783738_ha-783738-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 cp ha-783738:/home/docker/cp-test.txt ha-783738-m04:/home/docker/cp-test_ha-783738_ha-783738-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m04 "sudo cat /home/docker/cp-test_ha-783738_ha-783738-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 cp testdata/cp-test.txt ha-783738-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 cp ha-783738-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3703533036/001/cp-test_ha-783738-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 cp ha-783738-m02:/home/docker/cp-test.txt ha-783738:/home/docker/cp-test_ha-783738-m02_ha-783738.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738 "sudo cat /home/docker/cp-test_ha-783738-m02_ha-783738.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 cp ha-783738-m02:/home/docker/cp-test.txt ha-783738-m03:/home/docker/cp-test_ha-783738-m02_ha-783738-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m03 "sudo cat /home/docker/cp-test_ha-783738-m02_ha-783738-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 cp ha-783738-m02:/home/docker/cp-test.txt ha-783738-m04:/home/docker/cp-test_ha-783738-m02_ha-783738-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m04 "sudo cat /home/docker/cp-test_ha-783738-m02_ha-783738-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 cp testdata/cp-test.txt ha-783738-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 cp ha-783738-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3703533036/001/cp-test_ha-783738-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 cp ha-783738-m03:/home/docker/cp-test.txt ha-783738:/home/docker/cp-test_ha-783738-m03_ha-783738.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738 "sudo cat /home/docker/cp-test_ha-783738-m03_ha-783738.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 cp ha-783738-m03:/home/docker/cp-test.txt ha-783738-m02:/home/docker/cp-test_ha-783738-m03_ha-783738-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m02 "sudo cat /home/docker/cp-test_ha-783738-m03_ha-783738-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 cp ha-783738-m03:/home/docker/cp-test.txt ha-783738-m04:/home/docker/cp-test_ha-783738-m03_ha-783738-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m04 "sudo cat /home/docker/cp-test_ha-783738-m03_ha-783738-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 cp testdata/cp-test.txt ha-783738-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 cp ha-783738-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3703533036/001/cp-test_ha-783738-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 cp ha-783738-m04:/home/docker/cp-test.txt ha-783738:/home/docker/cp-test_ha-783738-m04_ha-783738.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738 "sudo cat /home/docker/cp-test_ha-783738-m04_ha-783738.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 cp ha-783738-m04:/home/docker/cp-test.txt ha-783738-m02:/home/docker/cp-test_ha-783738-m04_ha-783738-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m02 "sudo cat /home/docker/cp-test_ha-783738-m04_ha-783738-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 cp ha-783738-m04:/home/docker/cp-test.txt ha-783738-m03:/home/docker/cp-test_ha-783738-m04_ha-783738-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m03 "sudo cat /home/docker/cp-test_ha-783738-m04_ha-783738-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.98s)
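
The CopyFile matrix above exercises all three cp directions across the four nodes; one representative of each, taken from the log (host-side destination path shortened):
    out/minikube-linux-amd64 -p ha-783738 cp testdata/cp-test.txt ha-783738:/home/docker/cp-test.txt   # host -> node
    out/minikube-linux-amd64 -p ha-783738 cp ha-783738:/home/docker/cp-test.txt ./cp-test_ha-783738.txt   # node -> host
    out/minikube-linux-amd64 -p ha-783738 cp ha-783738:/home/docker/cp-test.txt ha-783738-m02:/home/docker/cp-test_ha-783738_ha-783738-m02.txt   # node -> node
    out/minikube-linux-amd64 -p ha-783738 ssh -n ha-783738-m02 "sudo cat /home/docker/cp-test_ha-783738_ha-783738-m02.txt"   # verify on the target node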

TestMultiControlPlane/serial/StopSecondaryNode (13.92s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-783738 node stop m02 -v=7 --alsologtostderr: (13.303423772s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-783738 status -v=7 --alsologtostderr: exit status 7 (611.28965ms)

-- stdout --
	ha-783738
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-783738-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-783738-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-783738-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0217 11:51:15.888089   97897 out.go:345] Setting OutFile to fd 1 ...
	I0217 11:51:15.888213   97897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:51:15.888225   97897 out.go:358] Setting ErrFile to fd 2...
	I0217 11:51:15.888231   97897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:51:15.888434   97897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-77349/.minikube/bin
	I0217 11:51:15.888609   97897 out.go:352] Setting JSON to false
	I0217 11:51:15.888638   97897 mustload.go:65] Loading cluster: ha-783738
	I0217 11:51:15.888763   97897 notify.go:220] Checking for updates...
	I0217 11:51:15.889086   97897 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:51:15.889134   97897 status.go:174] checking status of ha-783738 ...
	I0217 11:51:15.889630   97897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:51:15.889703   97897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:51:15.906436   97897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33403
	I0217 11:51:15.906992   97897 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:51:15.907741   97897 main.go:141] libmachine: Using API Version  1
	I0217 11:51:15.907764   97897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:51:15.908143   97897 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:51:15.908325   97897 main.go:141] libmachine: (ha-783738) Calling .GetState
	I0217 11:51:15.910054   97897 status.go:371] ha-783738 host status = "Running" (err=<nil>)
	I0217 11:51:15.910073   97897 host.go:66] Checking if "ha-783738" exists ...
	I0217 11:51:15.910492   97897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:51:15.910549   97897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:51:15.925291   97897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36645
	I0217 11:51:15.925762   97897 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:51:15.926238   97897 main.go:141] libmachine: Using API Version  1
	I0217 11:51:15.926259   97897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:51:15.926589   97897 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:51:15.926807   97897 main.go:141] libmachine: (ha-783738) Calling .GetIP
	I0217 11:51:15.929527   97897 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:51:15.929954   97897 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:46:14 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:51:15.929979   97897 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:51:15.930146   97897 host.go:66] Checking if "ha-783738" exists ...
	I0217 11:51:15.930423   97897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:51:15.930461   97897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:51:15.945109   97897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33317
	I0217 11:51:15.945622   97897 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:51:15.946116   97897 main.go:141] libmachine: Using API Version  1
	I0217 11:51:15.946137   97897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:51:15.946440   97897 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:51:15.946608   97897 main.go:141] libmachine: (ha-783738) Calling .DriverName
	I0217 11:51:15.946782   97897 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0217 11:51:15.946802   97897 main.go:141] libmachine: (ha-783738) Calling .GetSSHHostname
	I0217 11:51:15.949898   97897 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:51:15.950380   97897 main.go:141] libmachine: (ha-783738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:6f:65", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:46:14 +0000 UTC Type:0 Mac:52:54:00:fb:6f:65 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-783738 Clientid:01:52:54:00:fb:6f:65}
	I0217 11:51:15.950428   97897 main.go:141] libmachine: (ha-783738) DBG | domain ha-783738 has defined IP address 192.168.39.249 and MAC address 52:54:00:fb:6f:65 in network mk-ha-783738
	I0217 11:51:15.950453   97897 main.go:141] libmachine: (ha-783738) Calling .GetSSHPort
	I0217 11:51:15.950637   97897 main.go:141] libmachine: (ha-783738) Calling .GetSSHKeyPath
	I0217 11:51:15.950793   97897 main.go:141] libmachine: (ha-783738) Calling .GetSSHUsername
	I0217 11:51:15.950951   97897 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738/id_rsa Username:docker}
	I0217 11:51:16.032388   97897 ssh_runner.go:195] Run: systemctl --version
	I0217 11:51:16.038906   97897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0217 11:51:16.053904   97897 kubeconfig.go:125] found "ha-783738" server: "https://192.168.39.254:8443"
	I0217 11:51:16.053948   97897 api_server.go:166] Checking apiserver status ...
	I0217 11:51:16.053989   97897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0217 11:51:16.068802   97897 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1975/cgroup
	W0217 11:51:16.078516   97897 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1975/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0217 11:51:16.078587   97897 ssh_runner.go:195] Run: ls
	I0217 11:51:16.082876   97897 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0217 11:51:16.088827   97897 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0217 11:51:16.088853   97897 status.go:463] ha-783738 apiserver status = Running (err=<nil>)
	I0217 11:51:16.088865   97897 status.go:176] ha-783738 status: &{Name:ha-783738 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0217 11:51:16.088884   97897 status.go:174] checking status of ha-783738-m02 ...
	I0217 11:51:16.089210   97897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:51:16.089250   97897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:51:16.104206   97897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44741
	I0217 11:51:16.104674   97897 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:51:16.105154   97897 main.go:141] libmachine: Using API Version  1
	I0217 11:51:16.105176   97897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:51:16.105535   97897 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:51:16.105688   97897 main.go:141] libmachine: (ha-783738-m02) Calling .GetState
	I0217 11:51:16.107279   97897 status.go:371] ha-783738-m02 host status = "Stopped" (err=<nil>)
	I0217 11:51:16.107292   97897 status.go:384] host is not running, skipping remaining checks
	I0217 11:51:16.107299   97897 status.go:176] ha-783738-m02 status: &{Name:ha-783738-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0217 11:51:16.107335   97897 status.go:174] checking status of ha-783738-m03 ...
	I0217 11:51:16.107756   97897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:51:16.107827   97897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:51:16.122569   97897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39755
	I0217 11:51:16.123045   97897 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:51:16.123592   97897 main.go:141] libmachine: Using API Version  1
	I0217 11:51:16.123617   97897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:51:16.123932   97897 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:51:16.124109   97897 main.go:141] libmachine: (ha-783738-m03) Calling .GetState
	I0217 11:51:16.125625   97897 status.go:371] ha-783738-m03 host status = "Running" (err=<nil>)
	I0217 11:51:16.125641   97897 host.go:66] Checking if "ha-783738-m03" exists ...
	I0217 11:51:16.126038   97897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:51:16.126102   97897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:51:16.140731   97897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40427
	I0217 11:51:16.141165   97897 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:51:16.141663   97897 main.go:141] libmachine: Using API Version  1
	I0217 11:51:16.141692   97897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:51:16.142030   97897 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:51:16.142240   97897 main.go:141] libmachine: (ha-783738-m03) Calling .GetIP
	I0217 11:51:16.145197   97897 main.go:141] libmachine: (ha-783738-m03) DBG | domain ha-783738-m03 has defined MAC address 52:54:00:94:f4:d0 in network mk-ha-783738
	I0217 11:51:16.145734   97897 main.go:141] libmachine: (ha-783738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:f4:d0", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:48:28 +0000 UTC Type:0 Mac:52:54:00:94:f4:d0 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-783738-m03 Clientid:01:52:54:00:94:f4:d0}
	I0217 11:51:16.145775   97897 main.go:141] libmachine: (ha-783738-m03) DBG | domain ha-783738-m03 has defined IP address 192.168.39.216 and MAC address 52:54:00:94:f4:d0 in network mk-ha-783738
	I0217 11:51:16.145887   97897 host.go:66] Checking if "ha-783738-m03" exists ...
	I0217 11:51:16.146192   97897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:51:16.146230   97897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:51:16.162015   97897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40419
	I0217 11:51:16.162421   97897 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:51:16.162928   97897 main.go:141] libmachine: Using API Version  1
	I0217 11:51:16.162987   97897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:51:16.163302   97897 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:51:16.163565   97897 main.go:141] libmachine: (ha-783738-m03) Calling .DriverName
	I0217 11:51:16.163725   97897 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0217 11:51:16.163743   97897 main.go:141] libmachine: (ha-783738-m03) Calling .GetSSHHostname
	I0217 11:51:16.166783   97897 main.go:141] libmachine: (ha-783738-m03) DBG | domain ha-783738-m03 has defined MAC address 52:54:00:94:f4:d0 in network mk-ha-783738
	I0217 11:51:16.167242   97897 main.go:141] libmachine: (ha-783738-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:f4:d0", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:48:28 +0000 UTC Type:0 Mac:52:54:00:94:f4:d0 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-783738-m03 Clientid:01:52:54:00:94:f4:d0}
	I0217 11:51:16.167260   97897 main.go:141] libmachine: (ha-783738-m03) DBG | domain ha-783738-m03 has defined IP address 192.168.39.216 and MAC address 52:54:00:94:f4:d0 in network mk-ha-783738
	I0217 11:51:16.167453   97897 main.go:141] libmachine: (ha-783738-m03) Calling .GetSSHPort
	I0217 11:51:16.167623   97897 main.go:141] libmachine: (ha-783738-m03) Calling .GetSSHKeyPath
	I0217 11:51:16.167775   97897 main.go:141] libmachine: (ha-783738-m03) Calling .GetSSHUsername
	I0217 11:51:16.167908   97897 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m03/id_rsa Username:docker}
	I0217 11:51:16.249515   97897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0217 11:51:16.267037   97897 kubeconfig.go:125] found "ha-783738" server: "https://192.168.39.254:8443"
	I0217 11:51:16.267070   97897 api_server.go:166] Checking apiserver status ...
	I0217 11:51:16.267111   97897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0217 11:51:16.280891   97897 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1801/cgroup
	W0217 11:51:16.289759   97897 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1801/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0217 11:51:16.289819   97897 ssh_runner.go:195] Run: ls
	I0217 11:51:16.294491   97897 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0217 11:51:16.300212   97897 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0217 11:51:16.300243   97897 status.go:463] ha-783738-m03 apiserver status = Running (err=<nil>)
	I0217 11:51:16.300254   97897 status.go:176] ha-783738-m03 status: &{Name:ha-783738-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0217 11:51:16.300277   97897 status.go:174] checking status of ha-783738-m04 ...
	I0217 11:51:16.300709   97897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:51:16.300748   97897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:51:16.316306   97897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44663
	I0217 11:51:16.316733   97897 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:51:16.317172   97897 main.go:141] libmachine: Using API Version  1
	I0217 11:51:16.317194   97897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:51:16.317563   97897 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:51:16.317778   97897 main.go:141] libmachine: (ha-783738-m04) Calling .GetState
	I0217 11:51:16.319386   97897 status.go:371] ha-783738-m04 host status = "Running" (err=<nil>)
	I0217 11:51:16.319405   97897 host.go:66] Checking if "ha-783738-m04" exists ...
	I0217 11:51:16.319734   97897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:51:16.319780   97897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:51:16.334219   97897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40611
	I0217 11:51:16.334657   97897 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:51:16.335100   97897 main.go:141] libmachine: Using API Version  1
	I0217 11:51:16.335121   97897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:51:16.335431   97897 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:51:16.335631   97897 main.go:141] libmachine: (ha-783738-m04) Calling .GetIP
	I0217 11:51:16.338361   97897 main.go:141] libmachine: (ha-783738-m04) DBG | domain ha-783738-m04 has defined MAC address 52:54:00:41:c1:dc in network mk-ha-783738
	I0217 11:51:16.338887   97897 main.go:141] libmachine: (ha-783738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:c1:dc", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:50:01 +0000 UTC Type:0 Mac:52:54:00:41:c1:dc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-783738-m04 Clientid:01:52:54:00:41:c1:dc}
	I0217 11:51:16.338915   97897 main.go:141] libmachine: (ha-783738-m04) DBG | domain ha-783738-m04 has defined IP address 192.168.39.168 and MAC address 52:54:00:41:c1:dc in network mk-ha-783738
	I0217 11:51:16.339048   97897 host.go:66] Checking if "ha-783738-m04" exists ...
	I0217 11:51:16.339367   97897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:51:16.339414   97897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:51:16.353988   97897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35913
	I0217 11:51:16.354393   97897 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:51:16.354778   97897 main.go:141] libmachine: Using API Version  1
	I0217 11:51:16.354796   97897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:51:16.355115   97897 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:51:16.355322   97897 main.go:141] libmachine: (ha-783738-m04) Calling .DriverName
	I0217 11:51:16.355538   97897 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0217 11:51:16.355564   97897 main.go:141] libmachine: (ha-783738-m04) Calling .GetSSHHostname
	I0217 11:51:16.358207   97897 main.go:141] libmachine: (ha-783738-m04) DBG | domain ha-783738-m04 has defined MAC address 52:54:00:41:c1:dc in network mk-ha-783738
	I0217 11:51:16.358618   97897 main.go:141] libmachine: (ha-783738-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:c1:dc", ip: ""} in network mk-ha-783738: {Iface:virbr1 ExpiryTime:2025-02-17 12:50:01 +0000 UTC Type:0 Mac:52:54:00:41:c1:dc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-783738-m04 Clientid:01:52:54:00:41:c1:dc}
	I0217 11:51:16.358656   97897 main.go:141] libmachine: (ha-783738-m04) DBG | domain ha-783738-m04 has defined IP address 192.168.39.168 and MAC address 52:54:00:41:c1:dc in network mk-ha-783738
	I0217 11:51:16.358833   97897 main.go:141] libmachine: (ha-783738-m04) Calling .GetSSHPort
	I0217 11:51:16.359031   97897 main.go:141] libmachine: (ha-783738-m04) Calling .GetSSHKeyPath
	I0217 11:51:16.359172   97897 main.go:141] libmachine: (ha-783738-m04) Calling .GetSSHUsername
	I0217 11:51:16.359319   97897 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/ha-783738-m04/id_rsa Username:docker}
	I0217 11:51:16.436733   97897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0217 11:51:16.450916   97897 status.go:176] ha-783738-m04 status: &{Name:ha-783738-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
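
Note: the "unable to find freezer cgroup" warning in the stderr block above is benign. api_server.go looks for an "N:freezer:" entry in the apiserver process's /proc/<pid>/cgroup, and such entries only exist where the cgroup v1 freezer controller is mounted; when the lookup fails, as it does here, the status check simply falls back to probing the apiserver's /healthz endpoint, which is what the subsequent "Checking apiserver healthz" lines show. A minimal sketch of the failing check (PID copied from the log; on a unified cgroup v2 hierarchy /proc/<pid>/cgroup holds a single "0::<path>" line, so the pattern matches nothing and egrep exits with status 1):

	sudo egrep '^[0-9]+:freezer:' /proc/1801/cgroup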
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.92s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

TestMultiControlPlane/serial/RestartSecondaryNode (42.31s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 node start m02 -v=7 --alsologtostderr
E0217 11:51:26.153491   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-783738 node start m02 -v=7 --alsologtostderr: (41.380237606s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (42.31s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (244.82s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-783738 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-783738 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-783738 -v=7 --alsologtostderr: (41.608766933s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-783738 --wait=true -v=7 --alsologtostderr
E0217 11:52:48.076018   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:53:34.520178   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:55:04.216356   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
E0217 11:55:31.918228   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-783738 --wait=true -v=7 --alsologtostderr: (3m23.106689111s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-783738
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (244.82s)

TestMultiControlPlane/serial/DeleteSecondaryNode (6.89s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-783738 node delete m03 -v=7 --alsologtostderr: (6.165829321s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (6.89s)
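
Note: the go-template passed at ha_test.go:521 prints the Ready condition status of each remaining node, one status per line. Expanded here for readability only; a real invocation should keep the template on a single line, because whitespace inside a go-template is emitted verbatim:

	kubectl get nodes -o go-template='
	{{range .items}}
	  {{range .status.conditions}}
	    {{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}
	  {{end}}
	{{end}}'

Presumably the test then asserts that every line reads True; the exact expectation lives in ha_test.go.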
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

TestMultiControlPlane/serial/StopCluster (37.57s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-783738 stop -v=7 --alsologtostderr: (37.470709554s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-783738 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-783738 status -v=7 --alsologtostderr: exit status 7 (103.175181ms)

-- stdout --
	ha-783738
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-783738-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-783738-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0217 11:56:50.111842  100335 out.go:345] Setting OutFile to fd 1 ...
	I0217 11:56:50.111945  100335 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:56:50.111953  100335 out.go:358] Setting ErrFile to fd 2...
	I0217 11:56:50.111957  100335 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 11:56:50.112148  100335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-77349/.minikube/bin
	I0217 11:56:50.112309  100335 out.go:352] Setting JSON to false
	I0217 11:56:50.112335  100335 mustload.go:65] Loading cluster: ha-783738
	I0217 11:56:50.112394  100335 notify.go:220] Checking for updates...
	I0217 11:56:50.112945  100335 config.go:182] Loaded profile config "ha-783738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 11:56:50.112975  100335 status.go:174] checking status of ha-783738 ...
	I0217 11:56:50.113585  100335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:56:50.113631  100335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:56:50.129121  100335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37945
	I0217 11:56:50.129762  100335 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:56:50.130450  100335 main.go:141] libmachine: Using API Version  1
	I0217 11:56:50.130475  100335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:56:50.130866  100335 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:56:50.131067  100335 main.go:141] libmachine: (ha-783738) Calling .GetState
	I0217 11:56:50.132512  100335 status.go:371] ha-783738 host status = "Stopped" (err=<nil>)
	I0217 11:56:50.132526  100335 status.go:384] host is not running, skipping remaining checks
	I0217 11:56:50.132532  100335 status.go:176] ha-783738 status: &{Name:ha-783738 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0217 11:56:50.132575  100335 status.go:174] checking status of ha-783738-m02 ...
	I0217 11:56:50.132863  100335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:56:50.132905  100335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:56:50.147454  100335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37639
	I0217 11:56:50.147812  100335 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:56:50.148236  100335 main.go:141] libmachine: Using API Version  1
	I0217 11:56:50.148257  100335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:56:50.148536  100335 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:56:50.148698  100335 main.go:141] libmachine: (ha-783738-m02) Calling .GetState
	I0217 11:56:50.150114  100335 status.go:371] ha-783738-m02 host status = "Stopped" (err=<nil>)
	I0217 11:56:50.150132  100335 status.go:384] host is not running, skipping remaining checks
	I0217 11:56:50.150140  100335 status.go:176] ha-783738-m02 status: &{Name:ha-783738-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0217 11:56:50.150161  100335 status.go:174] checking status of ha-783738-m04 ...
	I0217 11:56:50.150437  100335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 11:56:50.150474  100335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 11:56:50.165049  100335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36831
	I0217 11:56:50.165581  100335 main.go:141] libmachine: () Calling .GetVersion
	I0217 11:56:50.166095  100335 main.go:141] libmachine: Using API Version  1
	I0217 11:56:50.166132  100335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 11:56:50.166459  100335 main.go:141] libmachine: () Calling .GetMachineName
	I0217 11:56:50.166644  100335 main.go:141] libmachine: (ha-783738-m04) Calling .GetState
	I0217 11:56:50.168226  100335 status.go:371] ha-783738-m04 host status = "Stopped" (err=<nil>)
	I0217 11:56:50.168246  100335 status.go:384] host is not running, skipping remaining checks
	I0217 11:56:50.168252  100335 status.go:176] ha-783738-m04 status: &{Name:ha-783738-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (37.57s)

TestImageBuild/serial/Setup (50.97s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-034938 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-034938 --driver=kvm2 : (50.966563161s)
--- PASS: TestImageBuild/serial/Setup (50.97s)

TestImageBuild/serial/NormalBuild (1.37s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-034938
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-034938: (1.372559055s)
--- PASS: TestImageBuild/serial/NormalBuild (1.37s)

TestImageBuild/serial/BuildWithBuildArg (0.86s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-034938
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.86s)

TestImageBuild/serial/BuildWithDockerIgnore (0.58s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-034938
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.58s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.82s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-034938
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.82s)

TestJSONOutput/start/Command (88.45s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-278255 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0217 11:59:57.593266   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:00:04.219350   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-278255 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m28.449345869s)
--- PASS: TestJSONOutput/start/Command (88.45s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.56s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-278255 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.56s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.55s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-278255 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.55s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-278255 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-278255 --output=json --user=testUser: (7.552890901s)
--- PASS: TestJSONOutput/stop/Command (7.55s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-961635 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-961635 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (66.733388ms)

-- stdout --
	{"specversion":"1.0","id":"c53723c0-508d-49ac-94f3-d9ab00a2b0f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-961635] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"af96e935-0c5e-4489-aebb-529b0e7ef3aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20427"}}
	{"specversion":"1.0","id":"a264f14b-2035-41e1-bd3a-19bb4e6b7857","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"15411e0d-f6d3-4aa6-af09-9a31c0c231b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20427-77349/kubeconfig"}}
	{"specversion":"1.0","id":"debb2336-d21c-4b19-a7c4-7f202e27a046","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-77349/.minikube"}}
	{"specversion":"1.0","id":"0612375c-2e82-4a58-ae37-17cf9f00a429","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d4e0c9b1-863e-4a02-b3c0-f6e6f3d19dce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9f518a84-5c15-4402-bffe-7bce1d8e493c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-961635" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-961635
--- PASS: TestErrorJSONOutput (0.21s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (103.98s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-935024 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-935024 --driver=kvm2 : (49.652589822s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-948128 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-948128 --driver=kvm2 : (51.488129613s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-935024
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-948128
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-948128" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-948128
helpers_test.go:175: Cleaning up "first-935024" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-935024
--- PASS: TestMinikubeProfile (103.98s)

TestMountStart/serial/StartWithMountFirst (31.22s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-224754 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0217 12:03:34.519955   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-224754 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (30.224340712s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.22s)

TestMountStart/serial/VerifyMountFirst (0.37s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-224754 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-224754 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

TestMountStart/serial/StartWithMountSecond (28.19s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-250888 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-250888 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.186729457s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.19s)

TestMountStart/serial/VerifyMountSecond (0.37s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-250888 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-250888 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.69s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-224754 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-250888 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-250888 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (2.28s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-250888
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-250888: (2.282168935s)
--- PASS: TestMountStart/serial/Stop (2.28s)

TestMountStart/serial/RestartStopped (27.8s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-250888
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-250888: (26.796874296s)
--- PASS: TestMountStart/serial/RestartStopped (27.80s)

TestMountStart/serial/VerifyMountPostStop (0.37s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-250888 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-250888 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (133.39s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-989489 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0217 12:05:04.217257   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:06:27.280305   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-989489 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m12.971704903s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (133.39s)

TestMultiNode/serial/DeployApp2Nodes (4.2s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989489 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989489 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-989489 -- rollout status deployment/busybox: (2.642917485s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989489 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989489 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989489 -- exec busybox-58667487b6-pwrd7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989489 -- exec busybox-58667487b6-r5jfv -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989489 -- exec busybox-58667487b6-pwrd7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989489 -- exec busybox-58667487b6-r5jfv -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989489 -- exec busybox-58667487b6-pwrd7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989489 -- exec busybox-58667487b6-r5jfv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.20s)

TestMultiNode/serial/PingHostFrom2Pods (0.81s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989489 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989489 -- exec busybox-58667487b6-pwrd7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989489 -- exec busybox-58667487b6-pwrd7 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989489 -- exec busybox-58667487b6-r5jfv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989489 -- exec busybox-58667487b6-r5jfv -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)
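
Note: the shell pipeline run inside each busybox pod above extracts the address that host.minikube.internal resolved to (192.168.39.1 in this run, the VM network gateway), and the follow-up command pings it once from the pod. The hard-coded line and field positions assume the fixed output layout of BusyBox nslookup, so the extraction is brittle across busybox versions:

	# line 5, field 3 of BusyBox nslookup output is the resolved address
	nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3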
TestMultiNode/serial/AddNode (58.25s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-989489 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-989489 -v 3 --alsologtostderr: (57.677789123s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.25s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-989489 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.59s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)

TestMultiNode/serial/CopyFile (7.41s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 cp testdata/cp-test.txt multinode-989489:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 ssh -n multinode-989489 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 cp multinode-989489:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1924003069/001/cp-test_multinode-989489.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 ssh -n multinode-989489 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 cp multinode-989489:/home/docker/cp-test.txt multinode-989489-m02:/home/docker/cp-test_multinode-989489_multinode-989489-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 ssh -n multinode-989489 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 ssh -n multinode-989489-m02 "sudo cat /home/docker/cp-test_multinode-989489_multinode-989489-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 cp multinode-989489:/home/docker/cp-test.txt multinode-989489-m03:/home/docker/cp-test_multinode-989489_multinode-989489-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 ssh -n multinode-989489 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 ssh -n multinode-989489-m03 "sudo cat /home/docker/cp-test_multinode-989489_multinode-989489-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 cp testdata/cp-test.txt multinode-989489-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 ssh -n multinode-989489-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 cp multinode-989489-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1924003069/001/cp-test_multinode-989489-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 ssh -n multinode-989489-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 cp multinode-989489-m02:/home/docker/cp-test.txt multinode-989489:/home/docker/cp-test_multinode-989489-m02_multinode-989489.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 ssh -n multinode-989489-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 ssh -n multinode-989489 "sudo cat /home/docker/cp-test_multinode-989489-m02_multinode-989489.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 cp multinode-989489-m02:/home/docker/cp-test.txt multinode-989489-m03:/home/docker/cp-test_multinode-989489-m02_multinode-989489-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 ssh -n multinode-989489-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 ssh -n multinode-989489-m03 "sudo cat /home/docker/cp-test_multinode-989489-m02_multinode-989489-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 cp testdata/cp-test.txt multinode-989489-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 ssh -n multinode-989489-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 cp multinode-989489-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1924003069/001/cp-test_multinode-989489-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 ssh -n multinode-989489-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 cp multinode-989489-m03:/home/docker/cp-test.txt multinode-989489:/home/docker/cp-test_multinode-989489-m03_multinode-989489.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 ssh -n multinode-989489-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 ssh -n multinode-989489 "sudo cat /home/docker/cp-test_multinode-989489-m03_multinode-989489.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 cp multinode-989489-m03:/home/docker/cp-test.txt multinode-989489-m02:/home/docker/cp-test_multinode-989489-m03_multinode-989489-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 ssh -n multinode-989489-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 ssh -n multinode-989489-m02 "sudo cat /home/docker/cp-test_multinode-989489-m03_multinode-989489-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.41s)

TestMultiNode/serial/StopNode (3.38s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-989489 node stop m03: (2.520493463s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-989489 status: exit status 7 (430.092477ms)

-- stdout --
	multinode-989489
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-989489-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-989489-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-989489 status --alsologtostderr: exit status 7 (426.400655ms)

-- stdout --
	multinode-989489
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-989489-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-989489-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0217 12:08:08.414582  108409 out.go:345] Setting OutFile to fd 1 ...
	I0217 12:08:08.414710  108409 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:08:08.414720  108409 out.go:358] Setting ErrFile to fd 2...
	I0217 12:08:08.414726  108409 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:08:08.414935  108409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-77349/.minikube/bin
	I0217 12:08:08.415123  108409 out.go:352] Setting JSON to false
	I0217 12:08:08.415158  108409 mustload.go:65] Loading cluster: multinode-989489
	I0217 12:08:08.415255  108409 notify.go:220] Checking for updates...
	I0217 12:08:08.415566  108409 config.go:182] Loaded profile config "multinode-989489": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 12:08:08.415590  108409 status.go:174] checking status of multinode-989489 ...
	I0217 12:08:08.416010  108409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 12:08:08.416051  108409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 12:08:08.436717  108409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37351
	I0217 12:08:08.437261  108409 main.go:141] libmachine: () Calling .GetVersion
	I0217 12:08:08.437845  108409 main.go:141] libmachine: Using API Version  1
	I0217 12:08:08.437877  108409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 12:08:08.438317  108409 main.go:141] libmachine: () Calling .GetMachineName
	I0217 12:08:08.438523  108409 main.go:141] libmachine: (multinode-989489) Calling .GetState
	I0217 12:08:08.440152  108409 status.go:371] multinode-989489 host status = "Running" (err=<nil>)
	I0217 12:08:08.440170  108409 host.go:66] Checking if "multinode-989489" exists ...
	I0217 12:08:08.440447  108409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 12:08:08.440483  108409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 12:08:08.454881  108409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42825
	I0217 12:08:08.455297  108409 main.go:141] libmachine: () Calling .GetVersion
	I0217 12:08:08.455737  108409 main.go:141] libmachine: Using API Version  1
	I0217 12:08:08.455757  108409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 12:08:08.456064  108409 main.go:141] libmachine: () Calling .GetMachineName
	I0217 12:08:08.456253  108409 main.go:141] libmachine: (multinode-989489) Calling .GetIP
	I0217 12:08:08.459003  108409 main.go:141] libmachine: (multinode-989489) DBG | domain multinode-989489 has defined MAC address 52:54:00:d7:4b:ca in network mk-multinode-989489
	I0217 12:08:08.459361  108409 main.go:141] libmachine: (multinode-989489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:4b:ca", ip: ""} in network mk-multinode-989489: {Iface:virbr1 ExpiryTime:2025-02-17 13:04:55 +0000 UTC Type:0 Mac:52:54:00:d7:4b:ca Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-989489 Clientid:01:52:54:00:d7:4b:ca}
	I0217 12:08:08.459385  108409 main.go:141] libmachine: (multinode-989489) DBG | domain multinode-989489 has defined IP address 192.168.39.26 and MAC address 52:54:00:d7:4b:ca in network mk-multinode-989489
	I0217 12:08:08.459504  108409 host.go:66] Checking if "multinode-989489" exists ...
	I0217 12:08:08.459806  108409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 12:08:08.459855  108409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 12:08:08.474419  108409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0217 12:08:08.474932  108409 main.go:141] libmachine: () Calling .GetVersion
	I0217 12:08:08.475517  108409 main.go:141] libmachine: Using API Version  1
	I0217 12:08:08.475543  108409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 12:08:08.475863  108409 main.go:141] libmachine: () Calling .GetMachineName
	I0217 12:08:08.476036  108409 main.go:141] libmachine: (multinode-989489) Calling .DriverName
	I0217 12:08:08.476273  108409 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0217 12:08:08.476297  108409 main.go:141] libmachine: (multinode-989489) Calling .GetSSHHostname
	I0217 12:08:08.479078  108409 main.go:141] libmachine: (multinode-989489) DBG | domain multinode-989489 has defined MAC address 52:54:00:d7:4b:ca in network mk-multinode-989489
	I0217 12:08:08.479508  108409 main.go:141] libmachine: (multinode-989489) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:4b:ca", ip: ""} in network mk-multinode-989489: {Iface:virbr1 ExpiryTime:2025-02-17 13:04:55 +0000 UTC Type:0 Mac:52:54:00:d7:4b:ca Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-989489 Clientid:01:52:54:00:d7:4b:ca}
	I0217 12:08:08.479543  108409 main.go:141] libmachine: (multinode-989489) DBG | domain multinode-989489 has defined IP address 192.168.39.26 and MAC address 52:54:00:d7:4b:ca in network mk-multinode-989489
	I0217 12:08:08.479646  108409 main.go:141] libmachine: (multinode-989489) Calling .GetSSHPort
	I0217 12:08:08.479791  108409 main.go:141] libmachine: (multinode-989489) Calling .GetSSHKeyPath
	I0217 12:08:08.479927  108409 main.go:141] libmachine: (multinode-989489) Calling .GetSSHUsername
	I0217 12:08:08.480021  108409 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/multinode-989489/id_rsa Username:docker}
	I0217 12:08:08.564635  108409 ssh_runner.go:195] Run: systemctl --version
	I0217 12:08:08.569986  108409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0217 12:08:08.583467  108409 kubeconfig.go:125] found "multinode-989489" server: "https://192.168.39.26:8443"
	I0217 12:08:08.583504  108409 api_server.go:166] Checking apiserver status ...
	I0217 12:08:08.583539  108409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0217 12:08:08.595989  108409 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1845/cgroup
	W0217 12:08:08.604759  108409 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1845/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0217 12:08:08.604803  108409 ssh_runner.go:195] Run: ls
	I0217 12:08:08.608654  108409 api_server.go:253] Checking apiserver healthz at https://192.168.39.26:8443/healthz ...
	I0217 12:08:08.612964  108409 api_server.go:279] https://192.168.39.26:8443/healthz returned 200:
	ok
	I0217 12:08:08.612986  108409 status.go:463] multinode-989489 apiserver status = Running (err=<nil>)
	I0217 12:08:08.612998  108409 status.go:176] multinode-989489 status: &{Name:multinode-989489 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0217 12:08:08.613031  108409 status.go:174] checking status of multinode-989489-m02 ...
	I0217 12:08:08.613346  108409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 12:08:08.613393  108409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 12:08:08.628342  108409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36267
	I0217 12:08:08.628839  108409 main.go:141] libmachine: () Calling .GetVersion
	I0217 12:08:08.629437  108409 main.go:141] libmachine: Using API Version  1
	I0217 12:08:08.629463  108409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 12:08:08.629798  108409 main.go:141] libmachine: () Calling .GetMachineName
	I0217 12:08:08.629998  108409 main.go:141] libmachine: (multinode-989489-m02) Calling .GetState
	I0217 12:08:08.631488  108409 status.go:371] multinode-989489-m02 host status = "Running" (err=<nil>)
	I0217 12:08:08.631502  108409 host.go:66] Checking if "multinode-989489-m02" exists ...
	I0217 12:08:08.631806  108409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 12:08:08.631849  108409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 12:08:08.646312  108409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33477
	I0217 12:08:08.646691  108409 main.go:141] libmachine: () Calling .GetVersion
	I0217 12:08:08.647102  108409 main.go:141] libmachine: Using API Version  1
	I0217 12:08:08.647121  108409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 12:08:08.647393  108409 main.go:141] libmachine: () Calling .GetMachineName
	I0217 12:08:08.647569  108409 main.go:141] libmachine: (multinode-989489-m02) Calling .GetIP
	I0217 12:08:08.650315  108409 main.go:141] libmachine: (multinode-989489-m02) DBG | domain multinode-989489-m02 has defined MAC address 52:54:00:49:30:3d in network mk-multinode-989489
	I0217 12:08:08.650708  108409 main.go:141] libmachine: (multinode-989489-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:30:3d", ip: ""} in network mk-multinode-989489: {Iface:virbr1 ExpiryTime:2025-02-17 13:06:10 +0000 UTC Type:0 Mac:52:54:00:49:30:3d Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-989489-m02 Clientid:01:52:54:00:49:30:3d}
	I0217 12:08:08.650736  108409 main.go:141] libmachine: (multinode-989489-m02) DBG | domain multinode-989489-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:49:30:3d in network mk-multinode-989489
	I0217 12:08:08.650900  108409 host.go:66] Checking if "multinode-989489-m02" exists ...
	I0217 12:08:08.651267  108409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 12:08:08.651313  108409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 12:08:08.666104  108409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39279
	I0217 12:08:08.666564  108409 main.go:141] libmachine: () Calling .GetVersion
	I0217 12:08:08.667025  108409 main.go:141] libmachine: Using API Version  1
	I0217 12:08:08.667045  108409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 12:08:08.667409  108409 main.go:141] libmachine: () Calling .GetMachineName
	I0217 12:08:08.667560  108409 main.go:141] libmachine: (multinode-989489-m02) Calling .DriverName
	I0217 12:08:08.667726  108409 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0217 12:08:08.667753  108409 main.go:141] libmachine: (multinode-989489-m02) Calling .GetSSHHostname
	I0217 12:08:08.670451  108409 main.go:141] libmachine: (multinode-989489-m02) DBG | domain multinode-989489-m02 has defined MAC address 52:54:00:49:30:3d in network mk-multinode-989489
	I0217 12:08:08.670881  108409 main.go:141] libmachine: (multinode-989489-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:30:3d", ip: ""} in network mk-multinode-989489: {Iface:virbr1 ExpiryTime:2025-02-17 13:06:10 +0000 UTC Type:0 Mac:52:54:00:49:30:3d Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-989489-m02 Clientid:01:52:54:00:49:30:3d}
	I0217 12:08:08.670921  108409 main.go:141] libmachine: (multinode-989489-m02) DBG | domain multinode-989489-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:49:30:3d in network mk-multinode-989489
	I0217 12:08:08.671062  108409 main.go:141] libmachine: (multinode-989489-m02) Calling .GetSSHPort
	I0217 12:08:08.671177  108409 main.go:141] libmachine: (multinode-989489-m02) Calling .GetSSHKeyPath
	I0217 12:08:08.671355  108409 main.go:141] libmachine: (multinode-989489-m02) Calling .GetSSHUsername
	I0217 12:08:08.671492  108409 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20427-77349/.minikube/machines/multinode-989489-m02/id_rsa Username:docker}
	I0217 12:08:08.760043  108409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0217 12:08:08.773566  108409 status.go:176] multinode-989489-m02 status: &{Name:multinode-989489-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0217 12:08:08.773613  108409 status.go:174] checking status of multinode-989489-m03 ...
	I0217 12:08:08.774064  108409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 12:08:08.774112  108409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 12:08:08.789658  108409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41783
	I0217 12:08:08.790093  108409 main.go:141] libmachine: () Calling .GetVersion
	I0217 12:08:08.790619  108409 main.go:141] libmachine: Using API Version  1
	I0217 12:08:08.790644  108409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 12:08:08.790937  108409 main.go:141] libmachine: () Calling .GetMachineName
	I0217 12:08:08.791142  108409 main.go:141] libmachine: (multinode-989489-m03) Calling .GetState
	I0217 12:08:08.792607  108409 status.go:371] multinode-989489-m03 host status = "Stopped" (err=<nil>)
	I0217 12:08:08.792626  108409 status.go:384] host is not running, skipping remaining checks
	I0217 12:08:08.792633  108409 status.go:176] multinode-989489-m03 status: &{Name:multinode-989489-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.38s)
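
The apiserver health probe in the log above is a plain HTTPS GET against /healthz (a minimal sketch; the IP and port are the ones this run reported, and -k is needed because the cluster uses a self-signed certificate):

    curl -k https://192.168.39.26:8443/healthz   # prints "ok" when the apiserver is healthy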

TestMultiNode/serial/StartAfterStop (42.09s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 node start m03 -v=7 --alsologtostderr
E0217 12:08:34.519611   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-989489 node start m03 -v=7 --alsologtostderr: (41.465191735s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (42.09s)
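
The same stop/start cycle can be reproduced by hand (a minimal sketch; the profile multinode-989489 and node name m03 come from this run):

    minikube -p multinode-989489 node start m03   # restart the stopped worker
    minikube -p multinode-989489 status           # all three nodes should report Running
    kubectl get nodes                             # m03 should rejoin as Ready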

TestMultiNode/serial/RestartKeepsNodes (174.24s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-989489
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-989489
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-989489: (27.29050726s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-989489 --wait=true -v=8 --alsologtostderr
E0217 12:10:04.214120   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-989489 --wait=true -v=8 --alsologtostderr: (2m26.845304729s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-989489
--- PASS: TestMultiNode/serial/RestartKeepsNodes (174.24s)

TestMultiNode/serial/DeleteNode (2.25s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-989489 node delete m03: (1.722352963s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.25s)
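
The go-template query in the final step prints one Ready-condition status per node, so after deleting m03 it should emit exactly two "True" lines (a minimal sketch using the same template as the test, with simplified quoting):

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'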

TestMultiNode/serial/StopMultiNode (25.16s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-989489 stop: (24.983719284s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-989489 status: exit status 7 (89.69499ms)

-- stdout --
	multinode-989489
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-989489-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-989489 status --alsologtostderr: exit status 7 (86.512204ms)

-- stdout --
	multinode-989489
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-989489-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0217 12:12:12.500020  110161 out.go:345] Setting OutFile to fd 1 ...
	I0217 12:12:12.500156  110161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:12:12.500168  110161 out.go:358] Setting ErrFile to fd 2...
	I0217 12:12:12.500175  110161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:12:12.500377  110161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-77349/.minikube/bin
	I0217 12:12:12.500553  110161 out.go:352] Setting JSON to false
	I0217 12:12:12.500591  110161 mustload.go:65] Loading cluster: multinode-989489
	I0217 12:12:12.500709  110161 notify.go:220] Checking for updates...
	I0217 12:12:12.501055  110161 config.go:182] Loaded profile config "multinode-989489": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0217 12:12:12.501079  110161 status.go:174] checking status of multinode-989489 ...
	I0217 12:12:12.501579  110161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 12:12:12.501623  110161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 12:12:12.516320  110161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43461
	I0217 12:12:12.516809  110161 main.go:141] libmachine: () Calling .GetVersion
	I0217 12:12:12.517454  110161 main.go:141] libmachine: Using API Version  1
	I0217 12:12:12.517476  110161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 12:12:12.517853  110161 main.go:141] libmachine: () Calling .GetMachineName
	I0217 12:12:12.518107  110161 main.go:141] libmachine: (multinode-989489) Calling .GetState
	I0217 12:12:12.519932  110161 status.go:371] multinode-989489 host status = "Stopped" (err=<nil>)
	I0217 12:12:12.519950  110161 status.go:384] host is not running, skipping remaining checks
	I0217 12:12:12.519957  110161 status.go:176] multinode-989489 status: &{Name:multinode-989489 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0217 12:12:12.519996  110161 status.go:174] checking status of multinode-989489-m02 ...
	I0217 12:12:12.520327  110161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0217 12:12:12.520369  110161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0217 12:12:12.535160  110161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40189
	I0217 12:12:12.535578  110161 main.go:141] libmachine: () Calling .GetVersion
	I0217 12:12:12.536092  110161 main.go:141] libmachine: Using API Version  1
	I0217 12:12:12.536116  110161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0217 12:12:12.536439  110161 main.go:141] libmachine: () Calling .GetMachineName
	I0217 12:12:12.536618  110161 main.go:141] libmachine: (multinode-989489-m02) Calling .GetState
	I0217 12:12:12.538048  110161 status.go:371] multinode-989489-m02 host status = "Stopped" (err=<nil>)
	I0217 12:12:12.538065  110161 status.go:384] host is not running, skipping remaining checks
	I0217 12:12:12.538072  110161 status.go:176] multinode-989489-m02 status: &{Name:multinode-989489-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.16s)
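
Note that `minikube status` deliberately exits non-zero for a stopped cluster (exit status 7 in this run), so scripts have to tolerate the failure (a minimal sketch):

    minikube -p multinode-989489 status || echo "status exited $? (expected while the cluster is stopped)"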

TestMultiNode/serial/RestartMultiNode (120.55s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-989489 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0217 12:13:34.520185   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-989489 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (2m0.033330797s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989489 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (120.55s)

TestMultiNode/serial/ValidateNameConflict (52.95s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-989489
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-989489-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-989489-m02 --driver=kvm2 : exit status 14 (66.291757ms)

-- stdout --
	* [multinode-989489-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20427-77349/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-77349/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-989489-m02' is duplicated with machine name 'multinode-989489-m02' in profile 'multinode-989489'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-989489-m03 --driver=kvm2 
E0217 12:15:04.213976   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-989489-m03 --driver=kvm2 : (51.822772149s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-989489
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-989489: exit status 80 (218.513312ms)

-- stdout --
	* Adding node m03 to cluster multinode-989489 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-989489-m03 already exists in multinode-989489-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-989489-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (52.95s)
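
The conflicts arise because minikube reserves <profile>-m02, <profile>-m03, and so on for secondary machines of an existing profile (a minimal sketch of the two rejected commands from this run):

    minikube start -p multinode-989489-m02 --driver=kvm2   # exit 14: clashes with an existing machine name
    minikube node add -p multinode-989489                  # exit 80 while a multinode-989489-m03 profile exists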

TestPreload (150.88s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-850392 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-850392 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m21.283745067s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-850392 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-850392 image pull gcr.io/k8s-minikube/busybox: (1.489272745s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-850392
E0217 12:16:37.597415   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-850392: (12.58528182s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-850392 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-850392 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (54.500351985s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-850392 image list
helpers_test.go:175: Cleaning up "test-preload-850392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-850392
--- PASS: TestPreload (150.88s)
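
The preload check reduces to: build a cluster without a preload tarball, pull an extra image, restart onto a preloaded Kubernetes, and confirm the image survived (a minimal sketch with this run's profile; memory and logging flags omitted):

    minikube start -p test-preload-850392 --preload=false --kubernetes-version=v1.24.4 --driver=kvm2
    minikube -p test-preload-850392 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-850392
    minikube start -p test-preload-850392 --wait=true --driver=kvm2
    minikube -p test-preload-850392 image list   # busybox should still be present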

TestScheduledStopUnix (119.38s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-762527 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-762527 --memory=2048 --driver=kvm2 : (47.721228974s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-762527 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-762527 -n scheduled-stop-762527
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-762527 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0217 12:18:26.566769   84502 retry.go:31] will retry after 91.346µs: open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/scheduled-stop-762527/pid: no such file or directory
I0217 12:18:26.567936   84502 retry.go:31] will retry after 147.631µs: open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/scheduled-stop-762527/pid: no such file or directory
I0217 12:18:26.569109   84502 retry.go:31] will retry after 210.953µs: open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/scheduled-stop-762527/pid: no such file or directory
I0217 12:18:26.570259   84502 retry.go:31] will retry after 494.07µs: open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/scheduled-stop-762527/pid: no such file or directory
I0217 12:18:26.571416   84502 retry.go:31] will retry after 646.743µs: open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/scheduled-stop-762527/pid: no such file or directory
I0217 12:18:26.572547   84502 retry.go:31] will retry after 730.291µs: open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/scheduled-stop-762527/pid: no such file or directory
I0217 12:18:26.573693   84502 retry.go:31] will retry after 1.199847ms: open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/scheduled-stop-762527/pid: no such file or directory
I0217 12:18:26.575914   84502 retry.go:31] will retry after 2.175792ms: open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/scheduled-stop-762527/pid: no such file or directory
I0217 12:18:26.579132   84502 retry.go:31] will retry after 3.547838ms: open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/scheduled-stop-762527/pid: no such file or directory
I0217 12:18:26.583334   84502 retry.go:31] will retry after 3.9194ms: open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/scheduled-stop-762527/pid: no such file or directory
I0217 12:18:26.587553   84502 retry.go:31] will retry after 4.379939ms: open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/scheduled-stop-762527/pid: no such file or directory
I0217 12:18:26.592766   84502 retry.go:31] will retry after 8.368358ms: open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/scheduled-stop-762527/pid: no such file or directory
I0217 12:18:26.601979   84502 retry.go:31] will retry after 9.527374ms: open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/scheduled-stop-762527/pid: no such file or directory
I0217 12:18:26.612195   84502 retry.go:31] will retry after 21.624864ms: open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/scheduled-stop-762527/pid: no such file or directory
I0217 12:18:26.634503   84502 retry.go:31] will retry after 31.700409ms: open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/scheduled-stop-762527/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-762527 --cancel-scheduled
E0217 12:18:34.519928   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-762527 -n scheduled-stop-762527
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-762527
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-762527 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-762527
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-762527: exit status 7 (73.381759ms)

-- stdout --
	scheduled-stop-762527
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-762527 -n scheduled-stop-762527
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-762527 -n scheduled-stop-762527: exit status 7 (66.705646ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-762527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-762527
--- PASS: TestScheduledStopUnix (119.38s)
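
For reference, the scheduled-stop surface exercised above (a minimal sketch; every flag appears in this run):

    minikube stop -p scheduled-stop-762527 --schedule 5m                  # arm a stop five minutes out
    minikube stop -p scheduled-stop-762527 --cancel-scheduled             # disarm it again
    minikube status --format='{{.TimeToStop}}' -p scheduled-stop-762527   # inspect the pending timer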

TestSkaffold (127.66s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe174039230 version
skaffold_test.go:63: skaffold version: v2.14.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-154700 --memory=2600 --driver=kvm2 
E0217 12:20:04.222871   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-154700 --memory=2600 --driver=kvm2 : (49.738899497s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe174039230 run --minikube-profile skaffold-154700 --kube-context skaffold-154700 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe174039230 run --minikube-profile skaffold-154700 --kube-context skaffold-154700 --status-check=true --port-forward=false --interactive=false: (1m5.178303068s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5f8fc97978-5zsfm" [3900b1e6-604d-41d6-a425-c6310918de70] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004080957s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-595475bd4c-bdpm4" [201e5002-ee56-420a-b978-70f10f9f0ad9] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003178197s
helpers_test.go:175: Cleaning up "skaffold-154700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-154700
--- PASS: TestSkaffold (127.66s)
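
The skaffold integration boils down to pointing skaffold at the minikube profile and kube-context and letting its status check gate success (a minimal sketch; `skaffold` stands in for the versioned binary the test downloads to /tmp):

    minikube start -p skaffold-154700 --memory=2600 --driver=kvm2
    skaffold run --minikube-profile skaffold-154700 --kube-context skaffold-154700 \
        --status-check=true --port-forward=false --interactive=false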

TestRunningBinaryUpgrade (174.36s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.480623773 start -p running-upgrade-676930 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.480623773 start -p running-upgrade-676930 --memory=2200 --vm-driver=kvm2 : (2m12.327673068s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-676930 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-676930 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (40.47087313s)
helpers_test.go:175: Cleaning up "running-upgrade-676930" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-676930
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-676930: (1.135837746s)
--- PASS: TestRunningBinaryUpgrade (174.36s)
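
Both binary-upgrade tests share one shape: provision with an old minikube release, then re-run start against the same profile with the new binary (a minimal sketch; the /tmp path is the legacy v1.26.0 binary this run downloaded):

    /tmp/minikube-v1.26.0.480623773 start -p running-upgrade-676930 --memory=2200 --vm-driver=kvm2
    out/minikube-linux-amd64 start -p running-upgrade-676930 --memory=2200 --driver=kvm2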

TestKubernetesUpgrade (264.56s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-105939 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-105939 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (1m52.561481663s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-105939
E0217 12:26:33.530782   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/skaffold-154700/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:26:33.537186   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/skaffold-154700/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:26:33.548654   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/skaffold-154700/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:26:33.570125   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/skaffold-154700/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:26:33.611553   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/skaffold-154700/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:26:33.693037   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/skaffold-154700/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:26:33.854579   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/skaffold-154700/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:26:34.176361   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/skaffold-154700/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:26:34.818548   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/skaffold-154700/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-105939: (3.288386808s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-105939 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-105939 status --format={{.Host}}: exit status 7 (63.884343ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-105939 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2 
E0217 12:26:36.100868   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/skaffold-154700/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:26:38.663845   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/skaffold-154700/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:26:43.786246   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/skaffold-154700/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:26:54.027728   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/skaffold-154700/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:27:14.509069   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/skaffold-154700/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-105939 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2 : (1m31.787913367s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-105939 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-105939 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-105939 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (104.32305ms)

-- stdout --
	* [kubernetes-upgrade-105939] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20427-77349/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-77349/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-105939
	    minikube start -p kubernetes-upgrade-105939 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1059392 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-105939 --kubernetes-version=v1.32.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-105939 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-105939 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2 : (55.66105159s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-105939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-105939
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-105939: (1.029964986s)
--- PASS: TestKubernetesUpgrade (264.56s)
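
In short: in-place upgrades are supported, while downgrades are refused with K8S_DOWNGRADE_UNSUPPORTED (a minimal sketch of the version dance from this run):

    minikube start -p kubernetes-upgrade-105939 --kubernetes-version=v1.20.0 --driver=kvm2
    minikube stop -p kubernetes-upgrade-105939
    minikube start -p kubernetes-upgrade-105939 --kubernetes-version=v1.32.1 --driver=kvm2   # upgrade succeeds
    minikube start -p kubernetes-upgrade-105939 --kubernetes-version=v1.20.0 --driver=kvm2   # exit 106: refused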

TestPause/serial/Start (64.09s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-688995 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-688995 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m4.089266059s)
--- PASS: TestPause/serial/Start (64.09s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-547674 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-547674 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (65.269228ms)

-- stdout --
	* [NoKubernetes-547674] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20427-77349/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-77349/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
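
The two flags are mutually exclusive; if a kubernetes-version is pinned in the global config, unset it before starting without Kubernetes (a minimal sketch following the suggestion in the error output):

    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-547674 --no-kubernetes --driver=kvm2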

TestNoKubernetes/serial/StartWithK8s (96.32s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-547674 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-547674 --driver=kvm2 : (1m36.060416763s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-547674 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.32s)

TestPause/serial/SecondStartNoReconfiguration (78.23s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-688995 --alsologtostderr -v=1 --driver=kvm2 
E0217 12:23:07.282245   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-688995 --alsologtostderr -v=1 --driver=kvm2 : (1m18.212002387s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (78.23s)

TestNoKubernetes/serial/StartWithStopK8s (45.78s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-547674 --no-kubernetes --driver=kvm2 
E0217 12:23:34.519842   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-547674 --no-kubernetes --driver=kvm2 : (43.977561079s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-547674 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-547674 status -o json: exit status 2 (366.597438ms)

-- stdout --
	{"Name":"NoKubernetes-547674","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-547674
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-547674: (1.432524834s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (45.78s)

TestPause/serial/Pause (0.63s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-688995 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.63s)

TestPause/serial/VerifyStatus (0.28s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-688995 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-688995 --output=json --layout=cluster: exit status 2 (275.684866ms)

-- stdout --
	{"Name":"pause-688995","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-688995","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
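
The cluster layout encodes component state as HTTP-style codes (200 OK, 405 Stopped, 418 Paused), which makes it convenient to script against (a minimal sketch; jq is an assumed extra dependency):

    minikube status -p pause-688995 --output=json --layout=cluster | jq '.Nodes[].Components'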

TestPause/serial/Unpause (0.76s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-688995 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.76s)

TestPause/serial/PauseAgain (0.97s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-688995 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.97s)

TestPause/serial/DeletePaused (1.92s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-688995 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-688995 --alsologtostderr -v=5: (1.915921493s)
--- PASS: TestPause/serial/DeletePaused (1.92s)

TestNoKubernetes/serial/Start (32.2s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-547674 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-547674 --no-kubernetes --driver=kvm2 : (32.203609263s)
--- PASS: TestNoKubernetes/serial/Start (32.20s)

TestPause/serial/VerifyDeletedResources (4.49s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.486188667s)
--- PASS: TestPause/serial/VerifyDeletedResources (4.49s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-547674 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-547674 "sudo systemctl is-active --quiet service kubelet": exit status 1 (187.838859ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
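
`systemctl is-active --quiet` exits 0 only when the unit is active, so the test asserts a non-zero exit to prove kubelet stayed down (a minimal sketch mirroring the command above):

    minikube ssh -p NoKubernetes-547674 "sudo systemctl is-active --quiet service kubelet" \
        || echo "kubelet is not running, as expected"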

TestNoKubernetes/serial/ProfileList (0.56s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.56s)

TestNoKubernetes/serial/Stop (2.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-547674
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-547674: (2.293743542s)
--- PASS: TestNoKubernetes/serial/Stop (2.29s)

TestNoKubernetes/serial/StartNoArgs (96.75s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-547674 --driver=kvm2 
E0217 12:25:04.214595   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-547674 --driver=kvm2 : (1m36.746030704s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (96.75s)

TestStoppedBinaryUpgrade/Setup (0.43s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.43s)

TestStoppedBinaryUpgrade/Upgrade (146.67s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.577202542 start -p stopped-upgrade-467689 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.577202542 start -p stopped-upgrade-467689 --memory=2200 --vm-driver=kvm2 : (1m10.874865374s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.577202542 -p stopped-upgrade-467689 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.577202542 -p stopped-upgrade-467689 stop: (12.483144514s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-467689 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-467689 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m3.314962596s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (146.67s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-547674 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-547674 "sudo systemctl is-active --quiet service kubelet": exit status 1 (209.562597ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.06s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-467689
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-467689: (1.059313166s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.06s)

TestNetworkPlugins/group/auto/Start (92.6s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-671228 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-671228 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m32.602306451s)
--- PASS: TestNetworkPlugins/group/auto/Start (92.60s)

TestNetworkPlugins/group/kindnet/Start (102.66s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-671228 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
E0217 12:29:17.392876   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/skaffold-154700/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-671228 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m42.658455557s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (102.66s)

TestNetworkPlugins/group/calico/Start (120.84s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-671228 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
E0217 12:30:04.214551   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-671228 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m0.842086946s)
--- PASS: TestNetworkPlugins/group/calico/Start (120.84s)

TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-671228 "pgrep -a kubelet"
I0217 12:30:37.463523   84502 config.go:182] Loaded profile config "auto-671228": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

TestNetworkPlugins/group/auto/NetCatPod (11.96s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-671228 replace --force -f testdata/netcat-deployment.yaml
I0217 12:30:38.404418   84502 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-2n6ll" [e0b924e0-492e-46ed-9557-ba9caff935a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-2n6ll" [e0b924e0-492e-46ed-9557-ba9caff935a7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004674916s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.96s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-671228 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)
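
Note: the DNS check resolves the short name kubernetes.default through the pod's DNS search path. When it fails, querying the fully qualified name separates resolver problems from search-domain problems (sketch; cluster.local is the assumed default cluster domain):

	kubectl --context auto-671228 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local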

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-671228 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-671228 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
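
Note: Localhost and HairPin run the same nc probe against different targets. Localhost dials 127.0.0.1:8080 inside the pod itself, while HairPin dials the pod's own "netcat" service name, so the traffic leaves the pod and must be NATed back to it (hairpin traffic). One way to inspect the service the probe targets (sketch; the service name is inferred from the nc argument):

	kubectl --context auto-671228 get svc netcat -o wide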

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-vrd6t" [054e13bb-88bb-42c9-9554-36f9bfa5f37a] Running
E0217 12:30:51.228918   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/gvisor-061450/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:30:51.871095   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/gvisor-061450/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:30:53.153498   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/gvisor-061450/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:30:55.715724   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/gvisor-061450/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004282375s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
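
Note: ControllerPod only waits for the CNI's own daemonset pod to reach Running, using the label selector shown above. The equivalent manual query (sketch):

	kubectl --context kindnet-671228 -n kube-system get pods -l app=kindnet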

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-671228 "pgrep -a kubelet"
I0217 12:30:57.334991   84502 config.go:182] Loaded profile config "kindnet-671228": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.32s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-671228 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-wqb29" [89b41833-227c-4f26-9e66-3296a59a7dc1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0217 12:31:00.837511   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/gvisor-061450/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-wqb29" [89b41833-227c-4f26-9e66-3296a59a7dc1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004504938s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.32s)

TestNetworkPlugins/group/custom-flannel/Start (72.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-671228 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-671228 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m12.364921777s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (72.37s)
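
Note: unlike the named plugins above, --cni here takes a path, so minikube applies the local manifest testdata/kube-flannel.yaml instead of a bundled CNI. Any local manifest should work the same way (sketch, with a hypothetical path):

	out/minikube-linux-amd64 start -p custom-flannel-671228 --cni=/path/to/custom-cni.yaml --driver=kvm2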

TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-671228 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-671228 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-671228 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/false/Start (95.52s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-671228 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-671228 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m35.522512372s)
--- PASS: TestNetworkPlugins/group/false/Start (95.52s)

TestNetworkPlugins/group/enable-default-cni/Start (110.51s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-671228 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E0217 12:31:31.562016   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/gvisor-061450/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:31:33.530976   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/skaffold-154700/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-671228 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m50.511153585s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (110.51s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-k644m" [b1fc4cc5-b342-45cd-bcf6-47c06353a509] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003749113s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-671228 "pgrep -a kubelet"
I0217 12:31:47.197616   84502 config.go:182] Loaded profile config "calico-671228": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

TestNetworkPlugins/group/calico/NetCatPod (11.23s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-671228 replace --force -f testdata/netcat-deployment.yaml
I0217 12:31:48.396367   84502 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-64dbz" [26836208-4b1c-4fec-ade2-33ad6dcf5ab1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-64dbz" [26836208-4b1c-4fec-ade2-33ad6dcf5ab1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003832784s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.23s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-671228 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-671228 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-671228 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/flannel/Start (92.78s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-671228 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-671228 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m32.777100066s)
--- PASS: TestNetworkPlugins/group/flannel/Start (92.78s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-671228 "pgrep -a kubelet"
I0217 12:32:20.481879   84502 config.go:182] Loaded profile config "custom-flannel-671228": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-671228 replace --force -f testdata/netcat-deployment.yaml
I0217 12:32:21.308477   84502 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-n7jsm" [71ae2e18-00a6-44db-b21b-27fa4a4df533] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-n7jsm" [71ae2e18-00a6-44db-b21b-27fa4a4df533] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004131948s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.21s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-671228 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-671228 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-671228 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/false/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-671228 "pgrep -a kubelet"
I0217 12:32:50.963214   84502 config.go:182] Loaded profile config "false-671228": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.33s)

TestNetworkPlugins/group/false/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-671228 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-dctk9" [080edb06-617c-403c-9d92-113730111021] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-dctk9" [080edb06-617c-403c-9d92-113730111021] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.003114065s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.29s)

TestNetworkPlugins/group/bridge/Start (111.71s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-671228 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-671228 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m51.713807307s)
--- PASS: TestNetworkPlugins/group/bridge/Start (111.71s)

TestNetworkPlugins/group/false/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-671228 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.15s)

TestNetworkPlugins/group/false/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-671228 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.13s)

TestNetworkPlugins/group/false/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-671228 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-671228 "pgrep -a kubelet"
I0217 12:33:17.215325   84502 config.go:182] Loaded profile config "enable-default-cni-671228": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-671228 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-qmvjp" [355aee55-b1db-4d5a-8f5c-0b9cc2cc22fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0217 12:33:17.598871   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-qmvjp" [355aee55-b1db-4d5a-8f5c-0b9cc2cc22fc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003113291s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

TestNetworkPlugins/group/kubenet/Start (81.87s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-671228 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-671228 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m21.866538838s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (81.87s)
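
Note: kubenet is not a CNI plugin, which is why it is selected with --network-plugin=kubenet rather than --cni; networking is then handled by the kubelet's built-in bridge. A rough way to see that bridge from inside the VM (sketch; cbr0 is the name kubenet has historically used):

	out/minikube-linux-amd64 ssh -p kubenet-671228 "ip link show"
	# kubenet's managed bridge (historically cbr0) should appear in this listing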

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-671228 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-671228 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-671228 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestStartStop/group/old-k8s-version/serial/FirstStart (198.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-332023 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-332023 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (3m18.314013034s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (198.31s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-knk9n" [d65dbd5b-17e2-4f26-83cf-3ac0db2fd1da] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00443427s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-671228 "pgrep -a kubelet"
I0217 12:33:55.659205   84502 config.go:182] Loaded profile config "flannel-671228": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/flannel/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-671228 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-9rvr2" [05a0e7fb-4b51-4e0d-bc07-500767584307] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-9rvr2" [05a0e7fb-4b51-4e0d-bc07-500767584307] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003816611s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.24s)

TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-671228 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-671228 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-671228 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestStartStop/group/no-preload/serial/FirstStart (83.46s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-996039 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-996039 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.32.1: (1m23.459629111s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (83.46s)
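
Note: --preload=false makes minikube skip its preloaded image tarball, so every control-plane image is pulled individually; exercising that pull path is the point of the no-preload group. Confirming the images landed in the VM's docker daemon (sketch; the runtime is docker per the profile config lines above):

	out/minikube-linux-amd64 ssh -p no-preload-996039 "docker images"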

TestNetworkPlugins/group/kubenet/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-671228 "pgrep -a kubelet"
I0217 12:34:42.374723   84502 config.go:182] Loaded profile config "kubenet-671228": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.37s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.48s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-671228 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-b6xwx" [59f3351c-b6c9-4665-828e-3121e25f504f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-b6xwx" [59f3351c-b6c9-4665-828e-3121e25f504f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.004069749s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.48s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-671228 "pgrep -a kubelet"
I0217 12:34:43.628903   84502 config.go:182] Loaded profile config "bridge-671228": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

TestNetworkPlugins/group/bridge/NetCatPod (13.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-671228 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-gkzgt" [6083ec30-a04b-4b47-baec-2ec501d3ce2c] Pending
helpers_test.go:344: "netcat-5d86dc444-gkzgt" [6083ec30-a04b-4b47-baec-2ec501d3ce2c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-gkzgt" [6083ec30-a04b-4b47-baec-2ec501d3ce2c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.004344925s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.26s)

TestNetworkPlugins/group/kubenet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-671228 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.17s)

TestNetworkPlugins/group/kubenet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-671228 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

TestNetworkPlugins/group/kubenet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-671228 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)

TestNetworkPlugins/group/bridge/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-671228 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-671228 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-671228 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
E0217 12:42:08.434132   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/calico-671228/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/embed-certs/serial/FirstStart (66.6s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-004074 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-004074 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.32.1: (1m6.600574254s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (66.60s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (93.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-467431 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.32.1
E0217 12:35:38.391448   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/auto-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:35:38.397911   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/auto-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:35:38.409342   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/auto-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:35:38.430867   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/auto-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:35:38.472318   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/auto-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:35:38.553775   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/auto-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:35:38.715346   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/auto-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:35:39.037575   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/auto-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:35:39.679187   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/auto-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:35:40.961189   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/auto-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:35:43.523358   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/auto-671228/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-467431 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.32.1: (1m33.841575764s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (93.84s)

TestStartStop/group/no-preload/serial/DeployApp (12.32s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-996039 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9f16969b-6f17-440a-9535-220079331b9f] Pending
E0217 12:35:48.644659   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/auto-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:35:50.583467   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/gvisor-061450/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:35:51.055342   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kindnet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:35:51.061785   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kindnet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:35:51.073252   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kindnet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:35:51.094736   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kindnet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:35:51.136356   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kindnet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:35:51.217921   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kindnet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:35:51.379882   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kindnet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:35:51.701499   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kindnet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:35:52.342827   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kindnet-671228/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [9f16969b-6f17-440a-9535-220079331b9f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9f16969b-6f17-440a-9535-220079331b9f] Running
E0217 12:35:53.624990   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kindnet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:35:56.186563   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kindnet-671228/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.003976812s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-996039 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.32s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-996039 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0217 12:35:58.886850   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/auto-671228/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-996039 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/no-preload/serial/Stop (13.36s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-996039 --alsologtostderr -v=3
E0217 12:36:01.308699   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kindnet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:36:11.550429   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kindnet-671228/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-996039 --alsologtostderr -v=3: (13.356490016s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.36s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-996039 -n no-preload-996039
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-996039 -n no-preload-996039: exit status 7 (78.0725ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-996039 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
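
Note: the "exit status 7 (may be ok)" above reflects minikube's bitmask-style status exit codes; as far as I can tell from the status command's design, 1, 2 and 4 are the host-, control-plane- and kubelet-not-running bits, so 7 is the expected result for a cleanly stopped profile and the test continues. Re-checking by hand (sketch):

	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-996039; echo "exit=$?"
	# a stopped profile should print Stopped and exit 7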

TestStartStop/group/no-preload/serial/SecondStart (294s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-996039 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-996039 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.32.1: (4m53.701351203s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-996039 -n no-preload-996039
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (294.00s)

TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-004074 create -f testdata/busybox.yaml
E0217 12:36:18.288411   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/gvisor-061450/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [28d3f041-06bb-4585-b860-dfeeeea58bd8] Pending
E0217 12:36:19.368898   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/auto-671228/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [28d3f041-06bb-4585-b860-dfeeeea58bd8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [28d3f041-06bb-4585-b860-dfeeeea58bd8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004426544s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-004074 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-004074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-004074 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/embed-certs/serial/Stop (13.35s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-004074 --alsologtostderr -v=3
E0217 12:36:32.031857   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kindnet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:36:33.530716   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/skaffold-154700/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:36:40.729269   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/calico-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:36:40.735691   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/calico-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:36:40.747323   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/calico-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:36:40.768861   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/calico-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:36:40.810455   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/calico-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:36:40.891946   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/calico-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:36:41.054006   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/calico-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:36:41.377110   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/calico-671228/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-004074 --alsologtostderr -v=3: (13.352384447s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.35s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-004074 -n embed-certs-004074
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-004074 -n embed-certs-004074: exit status 7 (83.485946ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-004074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0217 12:36:42.019398   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/calico-671228/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
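
The status checks in this block rely on minikube encoding cluster state in its exit code; the log shows exit status 7 with "Stopped" on stdout, which the test treats as acceptable after a stop. A shell sketch of the same check (the meaning of codes other than 7 is an assumption here):

if host=$(out/minikube-linux-amd64 status --format='{{.Host}}' -p embed-certs-004074 -n embed-certs-004074); then
  echo "host is ${host}"
else
  rc=$?
  # 7 is what a cleanly stopped VM returns, per the output above
  echo "status exited ${rc} with host state ${host}"
fi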

TestStartStop/group/embed-certs/serial/SecondStart (297.65s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-004074 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.32.1
E0217 12:36:43.301583   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/calico-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:36:45.863291   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/calico-671228/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-004074 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.32.1: (4m57.382404899s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-004074 -n embed-certs-004074
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (297.65s)
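
The second start reuses the stopped VM and, because of --embed-certs, writes the client certificate material inline into kubeconfig rather than as file paths. A quick way to confirm that for this profile (a sketch; the jsonpath filter assumes the kubeconfig user entry is named after the profile):

out/minikube-linux-amd64 start -p embed-certs-004074 --memory=2200 --wait=true \
  --embed-certs --driver=kvm2 --kubernetes-version=v1.32.1
# non-empty output means the cert is embedded as base64 data, not a file reference
kubectl config view --raw -o jsonpath='{.users[?(@.name=="embed-certs-004074")].user.client-certificate-data}' | wc -c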

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-467431 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a263dd4b-fa1a-44f5-a179-c1c824a1fe49] Pending
helpers_test.go:344: "busybox" [a263dd4b-fa1a-44f5-a179-c1c824a1fe49] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0217 12:36:50.985159   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/calico-671228/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [a263dd4b-fa1a-44f5-a179-c1c824a1fe49] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003270578s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-467431 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-467431 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-467431 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-467431 --alsologtostderr -v=3
E0217 12:37:00.330678   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/auto-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:01.226493   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/calico-671228/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-467431 --alsologtostderr -v=3: (13.344232064s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.34s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-332023 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c1d33b8d-108e-421a-a652-b0d24f24e6b2] Pending
helpers_test.go:344: "busybox" [c1d33b8d-108e-421a-a652-b0d24f24e6b2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c1d33b8d-108e-421a-a652-b0d24f24e6b2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003537894s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-332023 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-467431 -n default-k8s-diff-port-467431
E0217 12:37:12.993985   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kindnet-671228/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-467431 -n default-k8s-diff-port-467431: exit status 7 (81.709997ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-467431 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (299.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-467431 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-467431 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.32.1: (4m59.282590879s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-467431 -n default-k8s-diff-port-467431
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (299.54s)
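
This profile starts the API server on 8444 instead of the default 8443. The port should be visible in the kubeconfig server URL once the start completes (a sketch; the cluster entry name is assumed to match the profile):

kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-467431")].cluster.server}'
# expected to end in :8444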

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-332023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-332023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.003368318s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-332023 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/old-k8s-version/serial/Stop (13.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-332023 --alsologtostderr -v=3
E0217 12:37:21.285632   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/custom-flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:21.292014   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/custom-flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:21.303457   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/custom-flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:21.324873   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/custom-flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:21.366347   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/custom-flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:21.447814   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/custom-flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:21.609744   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/custom-flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:21.708311   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/calico-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:21.931878   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/custom-flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:22.573259   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/custom-flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:23.855428   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/custom-flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:26.417164   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/custom-flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-332023 --alsologtostderr -v=3: (13.353210335s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.35s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-332023 -n old-k8s-version-332023
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-332023 -n old-k8s-version-332023: exit status 7 (75.593548ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-332023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (396.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-332023 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
E0217 12:37:31.539465   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/custom-flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:41.780999   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/custom-flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:51.238286   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/false-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:51.244765   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/false-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:51.256199   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/false-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:51.277691   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/false-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:51.319355   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/false-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:51.400914   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/false-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:51.562537   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/false-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:51.884501   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/false-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:52.526295   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/false-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:53.808511   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/false-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:37:56.370174   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/false-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:01.492468   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/false-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:02.262560   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/custom-flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:02.670340   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/calico-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:11.734560   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/false-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:17.460737   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/enable-default-cni-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:17.467180   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/enable-default-cni-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:17.478657   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/enable-default-cni-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:17.500150   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/enable-default-cni-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:17.541710   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/enable-default-cni-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:17.623276   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/enable-default-cni-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:17.784938   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/enable-default-cni-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:18.107309   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/enable-default-cni-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:18.749562   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/enable-default-cni-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:20.030985   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/enable-default-cni-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:22.252793   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/auto-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:22.592483   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/enable-default-cni-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:27.714773   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/enable-default-cni-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:32.215999   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/false-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:34.519653   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/addons-603759/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:34.916001   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kindnet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:37.956150   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/enable-default-cni-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:43.224081   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/custom-flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:49.441914   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:49.448354   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:49.459812   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:49.481290   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:49.522784   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:49.604466   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:49.766062   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:50.087801   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:50.730088   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:52.011712   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:54.573894   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:58.437461   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/enable-default-cni-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:38:59.695455   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:09.937707   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:13.177451   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/false-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:24.592624   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/calico-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:30.419434   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:39.399296   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/enable-default-cni-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:42.830820   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kubenet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:42.837225   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kubenet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:42.848626   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kubenet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:42.870084   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kubenet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:42.911543   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kubenet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:42.993039   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kubenet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:43.154526   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kubenet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:43.476319   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kubenet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:43.875757   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/bridge-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:43.882162   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/bridge-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:43.893537   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/bridge-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:43.914994   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/bridge-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:43.956417   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/bridge-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:44.037840   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/bridge-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:44.118336   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kubenet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:44.200051   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/bridge-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:44.522166   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/bridge-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:45.163649   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/bridge-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:45.400278   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kubenet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:46.445866   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/bridge-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:47.284192   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:47.962454   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kubenet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:49.007245   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/bridge-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:53.084566   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kubenet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:54.129471   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/bridge-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:40:03.326543   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kubenet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:40:04.214062   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/functional-576160/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:40:04.371604   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/bridge-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:40:05.145527   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/custom-flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:40:11.381131   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:40:23.807881   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kubenet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:40:24.853249   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/bridge-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:40:35.099604   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/false-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:40:38.390819   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/auto-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:40:50.583252   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/gvisor-061450/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:40:51.054995   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kindnet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:41:01.320690   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/enable-default-cni-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:41:04.770094   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kubenet-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:41:05.815202   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/bridge-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:41:06.094994   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/auto-671228/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-332023 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (6m36.507845004s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-332023 -n old-k8s-version-332023
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (396.76s)
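
This start exercises the KVM-specific flags (--kvm-network=default, --kvm-qemu-uri=qemu:///system) while pinning the cluster to the old v1.20.0 release. One way to confirm the restarted node really runs the old kubelet (a sketch, not part of the harness):

kubectl --context old-k8s-version-332023 get nodes \
  -o jsonpath='{.items[0].status.nodeInfo.kubeletVersion}'
# expected: v1.20.0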

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (15.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-jmqcg" [120ff103-297a-47be-a585-2ad2ee33f955] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-jmqcg" [120ff103-297a-47be-a585-2ad2ee33f955] Running
E0217 12:41:18.758156   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/kindnet-671228/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.005477079s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (15.01s)
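
The wait performed by the harness above is roughly equivalent to the following kubectl invocation (a sketch; the test polls pod phase itself rather than shelling out to kubectl wait):

kubectl --context no-preload-996039 -n kubernetes-dashboard wait \
  --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m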

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-jmqcg" [120ff103-297a-47be-a585-2ad2ee33f955] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004719909s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-996039 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-996039 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)
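
The image check parses the JSON emitted by minikube and flags anything outside the stock Kubernetes images. A sketch of doing the same by hand; jq and the repoTags field name are assumptions, not part of the harness:

out/minikube-linux-amd64 -p no-preload-996039 image list --format=json \
  | jq -r '.[].repoTags[]?' \
  | grep -vE '^registry\.k8s\.io/' || true
# in this run that would surface at least the two gcr.io/k8s-minikube images
# noted in the lines above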

TestStartStop/group/no-preload/serial/Pause (2.58s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-996039 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-996039 -n no-preload-996039
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-996039 -n no-preload-996039: exit status 2 (240.14592ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-996039 -n no-preload-996039
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-996039 -n no-preload-996039: exit status 2 (259.656735ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-996039 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-996039 -n no-preload-996039
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-996039 -n no-preload-996039
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.58s)
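
While paused, the API server reports "Paused" and the kubelet "Stopped", and status exits 2 for both checks, which the test treats as "may be ok"; unpause restores them. A condensed sketch of the same sequence:

out/minikube-linux-amd64 pause -p no-preload-996039 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format='{{.APIServer}}' -p no-preload-996039 -n no-preload-996039 || echo "exit $? (2 expected while paused)"
out/minikube-linux-amd64 unpause -p no-preload-996039 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format='{{.APIServer}}' -p no-preload-996039 -n no-preload-996039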

TestStartStop/group/newest-cni/serial/FirstStart (62.66s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-566873 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.32.1
E0217 12:41:33.303424   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:41:33.531543   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/skaffold-154700/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-566873 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.32.1: (1m2.663715879s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (62.66s)
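
This start wires a bare CNI setup: --network-plugin=cni plus a kubeadm pod CIDR override passed through --extra-config. Once up, the CIDR should be reflected on the node object (a sketch; the per-node /24 split is kubeadm's default behavior and an assumption here):

kubectl --context newest-cni-566873 get nodes -o jsonpath='{.items[0].spec.podCIDR}'
# expected: a /24 carved out of 10.42.0.0/16, e.g. 10.42.0.0/24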

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-bcxjl" [d13e1e03-9b31-4196-8c1d-2dd31fb88e0c] Running
E0217 12:41:40.729377   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/calico-671228/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00341948s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-bcxjl" [d13e1e03-9b31-4196-8c1d-2dd31fb88e0c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003757936s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-004074 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-004074 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.19s)

TestStartStop/group/embed-certs/serial/Pause (2.41s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-004074 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-004074 -n embed-certs-004074
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-004074 -n embed-certs-004074: exit status 2 (246.430957ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-004074 -n embed-certs-004074
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-004074 -n embed-certs-004074: exit status 2 (241.845764ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-004074 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-004074 -n embed-certs-004074
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-004074 -n embed-certs-004074
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.41s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-vbf4s" [19640e69-da53-4f5d-9327-d8bf831e7bfc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003282507s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-vbf4s" [19640e69-da53-4f5d-9327-d8bf831e7bfc] Running
E0217 12:42:21.285528   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/custom-flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00353283s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-467431 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-467431 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-467431 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-467431 -n default-k8s-diff-port-467431
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-467431 -n default-k8s-diff-port-467431: exit status 2 (250.369052ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-467431 -n default-k8s-diff-port-467431
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-467431 -n default-k8s-diff-port-467431: exit status 2 (248.63585ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-467431 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-467431 -n default-k8s-diff-port-467431
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-467431 -n default-k8s-diff-port-467431
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.43s)
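The pause cycle above maps onto four commands (all verbatim from the log). Note that status intentionally exits 2 while the node is paused, with the apiserver reporting Paused and the kubelet Stopped, which is why the test records the non-zero exits as "may be ok":

  out/minikube-linux-amd64 pause -p default-k8s-diff-port-467431 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-467431 -n default-k8s-diff-port-467431
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-467431 -n default-k8s-diff-port-467431
  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-467431 --alsologtostderr -v=1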

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.82s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-566873 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.82s)
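The enable step above exercises the addon image-override mechanism; the flags are verbatim from the log, and fake.domain is presumably chosen so no real registry is ever contacted:

  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-566873 \
    --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
    --registries=MetricsServer=fake.domain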

TestStartStop/group/newest-cni/serial/Stop (12.62s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-566873 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-566873 --alsologtostderr -v=3: (12.621914043s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.62s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-566873 -n newest-cni-566873
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-566873 -n newest-cni-566873: exit status 7 (65.584548ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-566873 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
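Enabling an addon while the cluster is down is a supported flow: status exits 7 to report the stopped host (again "may be ok"), and the enable call records the addon so it can take effect on the next start. Both commands are verbatim from the log:

  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-566873 -n newest-cni-566873
  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-566873 --images=MetricsScraper=registry.k8s.io/echoserver:1.4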

TestStartStop/group/newest-cni/serial/SecondStart (37.53s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-566873 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.32.1
E0217 12:42:48.986886   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/custom-flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:42:51.238189   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/false-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:42:56.595880   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/skaffold-154700/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:43:17.461452   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/enable-default-cni-671228/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:43:18.941489   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/false-671228/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-566873 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.32.1: (37.22167397s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-566873 -n newest-cni-566873
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.53s)
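The restart command (verbatim from the log, reflowed onto multiple lines) is worth reading closely: --wait is restricted to apiserver, system_pods and default_sa because no CNI is installed yet, so ordinary pods cannot schedule, and --extra-config hands the pod CIDR straight to kubeadm:

  out/minikube-linux-amd64 start -p newest-cni-566873 --memory=2200 --alsologtostderr \
    --wait=apiserver,system_pods,default_sa --network-plugin=cni \
    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
    --driver=kvm2 --kubernetes-version=v1.32.1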

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-566873 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.18s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-566873 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-566873 -n newest-cni-566873
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-566873 -n newest-cni-566873: exit status 2 (230.254129ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-566873 -n newest-cni-566873
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-566873 -n newest-cni-566873: exit status 2 (236.303893ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-566873 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-566873 -n newest-cni-566873
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-566873 -n newest-cni-566873
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.18s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-6pd6l" [12ef93a6-ac94-49d7-8f37-bf858577b7d5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003809512s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)
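The 9m0s readiness wait the test helper performs is roughly equivalent to kubectl wait; a sketch under that assumption (the Ready condition is an approximation of the helper's "Running" check, and the timeout mirrors the log):

  kubectl --context old-k8s-version-332023 -n kubernetes-dashboard \
    wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m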

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-6pd6l" [12ef93a6-ac94-49d7-8f37-bf858577b7d5] Running
E0217 12:44:17.145393   84502 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-77349/.minikube/profiles/flannel-671228/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004244014s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-332023 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-332023 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.19s)

TestStartStop/group/old-k8s-version/serial/Pause (2.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-332023 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-332023 -n old-k8s-version-332023
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-332023 -n old-k8s-version-332023: exit status 2 (233.268147ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-332023 -n old-k8s-version-332023
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-332023 -n old-k8s-version-332023: exit status 2 (231.152101ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-332023 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-332023 -n old-k8s-version-332023
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-332023 -n old-k8s-version-332023
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.24s)

Test skip (34/344)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.1/cached-images 0
15 TestDownloadOnly/v1.32.1/binaries 0
16 TestDownloadOnly/v1.32.1/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/PodmanEnv 0
131 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
132 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
133 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
134 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
136 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
187 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
214 TestKicCustomNetwork 0
215 TestKicExistingNetwork 0
216 TestKicCustomSubnet 0
217 TestKicStaticIP 0
249 TestChangeNoneUser 0
252 TestScheduledStopWindows 0
256 TestInsufficientStorage 0
260 TestMissingContainerUpgrade 0
273 TestNetworkPlugins/group/cilium 3.84
283 TestStartStop/group/disable-driver-mounts 0.16

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

TestDownloadOnly/v1.32.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

TestDownloadOnly/v1.32.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (3.84s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-671228 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-671228

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-671228

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-671228

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-671228

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-671228

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-671228

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-671228

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-671228

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-671228

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-671228

>>> host: /etc/nsswitch.conf:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: /etc/hosts:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: /etc/resolv.conf:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-671228

>>> host: crictl pods:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: crictl containers:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> k8s: describe netcat deployment:
error: context "cilium-671228" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-671228" does not exist

>>> k8s: netcat logs:
error: context "cilium-671228" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-671228" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-671228" does not exist

>>> k8s: coredns logs:
error: context "cilium-671228" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-671228" does not exist

>>> k8s: api server logs:
error: context "cilium-671228" does not exist

>>> host: /etc/cni:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: ip a s:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: ip r s:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: iptables-save:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: iptables table nat:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-671228

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-671228

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-671228" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-671228" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-671228

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-671228

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-671228" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-671228" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-671228" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-671228" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-671228" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: kubelet daemon config:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> k8s: kubelet logs:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-671228

>>> host: docker daemon status:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: docker daemon config:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: docker system info:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: cri-docker daemon status:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: cri-docker daemon config:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: cri-dockerd version:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: containerd daemon status:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: containerd daemon config:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: containerd config dump:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: crio daemon status:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: crio daemon config:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: /etc/crio:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

>>> host: crio config:
* Profile "cilium-671228" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-671228"

----------------------- debugLogs end: cilium-671228 [took: 3.686054359s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-671228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-671228
--- SKIP: TestNetworkPlugins/group/cilium (3.84s)
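The cleanup the helper runs above is the same one available by hand; profile list (the command minikube's own hint text suggests throughout the debug output) shows any leftover profiles before deleting:

  out/minikube-linux-amd64 profile list
  out/minikube-linux-amd64 delete -p cilium-671228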

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-122532" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-122532
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)