Test Report: KVM_Linux 19364

663d17776bbce0b1e831c154f8973876d77c5fd1:2024-08-04:35636

Failed tests (1/349)

Order  Failed test                                         Duration (s)
176    TestMultiControlPlane/serial/RestartSecondaryNode   138.13
TestMultiControlPlane/serial/RestartSecondaryNode (138.13s)
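The transcript below shows the failing step: the harness runs the built minikube binary as a subprocess and treats any non-zero exit code as a failure. Here `node start m02` exited with status 90 after `sudo systemctl restart docker` failed on the restarted node (RUNTIME_ENABLE). For context, a minimal sketch of that invocation pattern, assuming a plain os/exec harness (the real ha_test.go helpers differ), is:

package ha_test

import (
	"os/exec"
	"testing"
)

// TestNodeStartSketch is a hypothetical stand-in for the step at
// ha_test.go:420: run the minikube binary against the ha-230158
// profile and fail the test on a non-zero exit code.
func TestNodeStartSketch(t *testing.T) {
	cmd := exec.Command("out/minikube-linux-amd64",
		"-p", "ha-230158", "node", "start", "m02",
		"-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// In this run err would be an *exec.ExitError carrying code 90.
		t.Fatalf("minikube node start failed: %v\n%s", err, out)
	}
}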

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 node start m02 -v=7 --alsologtostderr
E0804 00:38:01.833159   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
ha_test.go:420: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-230158 node start m02 -v=7 --alsologtostderr: exit status 90 (1m18.086881165s)

-- stdout --
	* Starting "ha-230158-m02" control-plane node in "ha-230158" cluster
	* Restarting existing kvm2 VM for "ha-230158-m02" ...
	
	

-- /stdout --
** stderr ** 
	I0804 00:37:58.331977   25510 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:37:58.332120   25510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:37:58.332130   25510 out.go:304] Setting ErrFile to fd 2...
	I0804 00:37:58.332141   25510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:37:58.332317   25510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
	I0804 00:37:58.332555   25510 mustload.go:65] Loading cluster: ha-230158
	I0804 00:37:58.332888   25510 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:37:58.333279   25510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:37:58.333322   25510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:37:58.348801   25510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
	I0804 00:37:58.349217   25510 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:37:58.349800   25510 main.go:141] libmachine: Using API Version  1
	I0804 00:37:58.349821   25510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:37:58.350182   25510 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:37:58.350406   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
	W0804 00:37:58.352024   25510 host.go:58] "ha-230158-m02" host status: Stopped
	I0804 00:37:58.354076   25510 out.go:177] * Starting "ha-230158-m02" control-plane node in "ha-230158" cluster
	I0804 00:37:58.355485   25510 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0804 00:37:58.355536   25510 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0804 00:37:58.355553   25510 cache.go:56] Caching tarball of preloaded images
	I0804 00:37:58.355653   25510 preload.go:172] Found /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0804 00:37:58.355665   25510 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0804 00:37:58.355778   25510 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
	I0804 00:37:58.355958   25510 start.go:360] acquireMachinesLock for ha-230158-m02: {Name:mk3c8b650475b5a29be5f1e49e0345d4de7c1632 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:37:58.356011   25510 start.go:364] duration metric: took 24.45µs to acquireMachinesLock for "ha-230158-m02"
	I0804 00:37:58.356028   25510 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:37:58.356038   25510 fix.go:54] fixHost starting: m02
	I0804 00:37:58.356354   25510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:37:58.356386   25510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:37:58.371434   25510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40781
	I0804 00:37:58.371809   25510 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:37:58.372299   25510 main.go:141] libmachine: Using API Version  1
	I0804 00:37:58.372319   25510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:37:58.372716   25510 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:37:58.372896   25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:37:58.373043   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
	I0804 00:37:58.374382   25510 fix.go:112] recreateIfNeeded on ha-230158-m02: state=Stopped err=<nil>
	I0804 00:37:58.374406   25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	W0804 00:37:58.374556   25510 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:37:58.376389   25510 out.go:177] * Restarting existing kvm2 VM for "ha-230158-m02" ...
	I0804 00:37:58.377504   25510 main.go:141] libmachine: (ha-230158-m02) Calling .Start
	I0804 00:37:58.377660   25510 main.go:141] libmachine: (ha-230158-m02) Ensuring networks are active...
	I0804 00:37:58.378211   25510 main.go:141] libmachine: (ha-230158-m02) Ensuring network default is active
	I0804 00:37:58.378552   25510 main.go:141] libmachine: (ha-230158-m02) Ensuring network mk-ha-230158 is active
	I0804 00:37:58.378934   25510 main.go:141] libmachine: (ha-230158-m02) Getting domain xml...
	I0804 00:37:58.379491   25510 main.go:141] libmachine: (ha-230158-m02) Creating domain...
	I0804 00:37:59.624645   25510 main.go:141] libmachine: (ha-230158-m02) Waiting to get IP...
	I0804 00:37:59.625595   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:37:59.626042   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has current primary IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:37:59.626081   25510 main.go:141] libmachine: (ha-230158-m02) Found IP for machine: 192.168.39.188
	I0804 00:37:59.626095   25510 main.go:141] libmachine: (ha-230158-m02) Reserving static IP address...
	I0804 00:37:59.626602   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "ha-230158-m02", mac: "52:54:00:18:6b:a7", ip: "192.168.39.188"} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:37:59.626628   25510 main.go:141] libmachine: (ha-230158-m02) Reserved static IP address: 192.168.39.188
	I0804 00:37:59.626649   25510 main.go:141] libmachine: (ha-230158-m02) DBG | skip adding static IP to network mk-ha-230158 - found existing host DHCP lease matching {name: "ha-230158-m02", mac: "52:54:00:18:6b:a7", ip: "192.168.39.188"}
	I0804 00:37:59.626667   25510 main.go:141] libmachine: (ha-230158-m02) DBG | Getting to WaitForSSH function...
	I0804 00:37:59.626679   25510 main.go:141] libmachine: (ha-230158-m02) Waiting for SSH to be available...
	I0804 00:37:59.628881   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:37:59.629315   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:37:59.629341   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:37:59.629473   25510 main.go:141] libmachine: (ha-230158-m02) DBG | Using SSH client type: external
	I0804 00:37:59.629501   25510 main.go:141] libmachine: (ha-230158-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa (-rw-------)
	I0804 00:37:59.629576   25510 main.go:141] libmachine: (ha-230158-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.188 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:37:59.629606   25510 main.go:141] libmachine: (ha-230158-m02) DBG | About to run SSH command:
	I0804 00:37:59.629620   25510 main.go:141] libmachine: (ha-230158-m02) DBG | exit 0
	I0804 00:38:10.770721   25510 main.go:141] libmachine: (ha-230158-m02) DBG | SSH cmd err, output: <nil>: 
	I0804 00:38:10.771116   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetConfigRaw
	I0804 00:38:10.771802   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
	I0804 00:38:10.774556   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:10.775061   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:38:10.775093   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:10.775352   25510 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
	I0804 00:38:10.775590   25510 machine.go:94] provisionDockerMachine start ...
	I0804 00:38:10.775613   25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:38:10.775828   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:38:10.778196   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:10.778563   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:38:10.778587   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:10.778743   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:38:10.778896   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:38:10.779103   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:38:10.779249   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:38:10.779397   25510 main.go:141] libmachine: Using SSH client type: native
	I0804 00:38:10.779583   25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0804 00:38:10.779595   25510 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:38:10.894581   25510 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:38:10.894614   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetMachineName
	I0804 00:38:10.894814   25510 buildroot.go:166] provisioning hostname "ha-230158-m02"
	I0804 00:38:10.894837   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetMachineName
	I0804 00:38:10.895004   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:38:10.897476   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:10.897844   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:38:10.897881   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:10.897983   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:38:10.898155   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:38:10.898354   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:38:10.898508   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:38:10.898677   25510 main.go:141] libmachine: Using SSH client type: native
	I0804 00:38:10.898885   25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0804 00:38:10.898903   25510 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-230158-m02 && echo "ha-230158-m02" | sudo tee /etc/hostname
	I0804 00:38:11.026137   25510 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-230158-m02
	
	I0804 00:38:11.026164   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:38:11.029047   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:11.029537   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:38:11.029569   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:11.029738   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:38:11.029932   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:38:11.030104   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:38:11.030262   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:38:11.030442   25510 main.go:141] libmachine: Using SSH client type: native
	I0804 00:38:11.030614   25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0804 00:38:11.030630   25510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-230158-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-230158-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-230158-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:38:11.156086   25510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:38:11.156110   25510 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-3947/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-3947/.minikube}
	I0804 00:38:11.156138   25510 buildroot.go:174] setting up certificates
	I0804 00:38:11.156146   25510 provision.go:84] configureAuth start
	I0804 00:38:11.156154   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetMachineName
	I0804 00:38:11.156432   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
	I0804 00:38:11.159121   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:11.159564   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:38:11.159594   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:11.159837   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:38:11.162124   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:11.162514   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:38:11.162546   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:11.162670   25510 provision.go:143] copyHostCerts
	I0804 00:38:11.162702   25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem
	I0804 00:38:11.162745   25510 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem, removing ...
	I0804 00:38:11.162757   25510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem
	I0804 00:38:11.162841   25510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem (1679 bytes)
	I0804 00:38:11.162933   25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem
	I0804 00:38:11.162957   25510 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem, removing ...
	I0804 00:38:11.162964   25510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem
	I0804 00:38:11.163001   25510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem (1082 bytes)
	I0804 00:38:11.163058   25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem
	I0804 00:38:11.163087   25510 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem, removing ...
	I0804 00:38:11.163096   25510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem
	I0804 00:38:11.163133   25510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem (1123 bytes)
	I0804 00:38:11.163210   25510 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem org=jenkins.ha-230158-m02 san=[127.0.0.1 192.168.39.188 ha-230158-m02 localhost minikube]
	I0804 00:38:11.457749   25510 provision.go:177] copyRemoteCerts
	I0804 00:38:11.457804   25510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:38:11.457831   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:38:11.460834   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:11.461178   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:38:11.461216   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:11.461413   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:38:11.461642   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:38:11.461792   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:38:11.462029   25510 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
	I0804 00:38:11.552505   25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0804 00:38:11.552571   25510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:38:11.577189   25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0804 00:38:11.577290   25510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0804 00:38:11.602036   25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0804 00:38:11.602101   25510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:38:11.625880   25510 provision.go:87] duration metric: took 469.715717ms to configureAuth
	I0804 00:38:11.625907   25510 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:38:11.626132   25510 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:38:11.626154   25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:38:11.626421   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:38:11.629200   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:11.629715   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:38:11.629742   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:11.629913   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:38:11.630078   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:38:11.630223   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:38:11.630379   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:38:11.630558   25510 main.go:141] libmachine: Using SSH client type: native
	I0804 00:38:11.630716   25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0804 00:38:11.630727   25510 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 00:38:11.748109   25510 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0804 00:38:11.748129   25510 buildroot.go:70] root file system type: tmpfs
	I0804 00:38:11.748260   25510 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 00:38:11.748288   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:38:11.751057   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:11.751421   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:38:11.751455   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:11.751712   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:38:11.751977   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:38:11.752136   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:38:11.752311   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:38:11.752476   25510 main.go:141] libmachine: Using SSH client type: native
	I0804 00:38:11.752674   25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0804 00:38:11.752768   25510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 00:38:11.885782   25510 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 00:38:11.885830   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:38:11.888701   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:11.889069   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:38:11.889098   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:11.889241   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:38:11.889427   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:38:11.889701   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:38:11.889860   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:38:11.890052   25510 main.go:141] libmachine: Using SSH client type: native
	I0804 00:38:11.890250   25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0804 00:38:11.890274   25510 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0804 00:38:13.843420   25510 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0804 00:38:13.843461   25510 machine.go:97] duration metric: took 3.067856975s to provisionDockerMachine
	I0804 00:38:13.843473   25510 start.go:293] postStartSetup for "ha-230158-m02" (driver="kvm2")
	I0804 00:38:13.843482   25510 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:38:13.843498   25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:38:13.843800   25510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:38:13.843831   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:38:13.846779   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:13.847277   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:38:13.847305   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:13.847479   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:38:13.847712   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:38:13.847892   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:38:13.848015   25510 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
	I0804 00:38:13.937619   25510 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:38:13.941892   25510 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:38:13.941913   25510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-3947/.minikube/addons for local assets ...
	I0804 00:38:13.941999   25510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-3947/.minikube/files for local assets ...
	I0804 00:38:13.942104   25510 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> 111362.pem in /etc/ssl/certs
	I0804 00:38:13.942117   25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> /etc/ssl/certs/111362.pem
	I0804 00:38:13.942261   25510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:38:13.952175   25510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem --> /etc/ssl/certs/111362.pem (1708 bytes)
	I0804 00:38:13.976761   25510 start.go:296] duration metric: took 133.275449ms for postStartSetup
	I0804 00:38:13.976800   25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:38:13.977069   25510 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0804 00:38:13.977090   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:38:13.980173   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:13.980544   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:38:13.980596   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:13.980800   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:38:13.981072   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:38:13.981269   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:38:13.981412   25510 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
	I0804 00:38:14.071182   25510 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0804 00:38:14.071254   25510 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0804 00:38:14.130544   25510 fix.go:56] duration metric: took 15.774500667s for fixHost
	I0804 00:38:14.130591   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:38:14.133406   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:14.133762   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:38:14.133788   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:14.133983   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:38:14.134181   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:38:14.134372   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:38:14.134501   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:38:14.134694   25510 main.go:141] libmachine: Using SSH client type: native
	I0804 00:38:14.134887   25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0804 00:38:14.134901   25510 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0804 00:38:14.255857   25510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722731894.223692124
	
	I0804 00:38:14.255885   25510 fix.go:216] guest clock: 1722731894.223692124
	I0804 00:38:14.255908   25510 fix.go:229] Guest: 2024-08-04 00:38:14.223692124 +0000 UTC Remote: 2024-08-04 00:38:14.130571736 +0000 UTC m=+15.831243026 (delta=93.120388ms)
	I0804 00:38:14.255935   25510 fix.go:200] guest clock delta is within tolerance: 93.120388ms
	I0804 00:38:14.255944   25510 start.go:83] releasing machines lock for "ha-230158-m02", held for 15.899924306s
	I0804 00:38:14.255973   25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:38:14.256217   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
	I0804 00:38:14.258949   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:14.259352   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:38:14.259371   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:14.259571   25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:38:14.260000   25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:38:14.260224   25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:38:14.260339   25510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:38:14.260409   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:38:14.260481   25510 ssh_runner.go:195] Run: systemctl --version
	I0804 00:38:14.260503   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:38:14.263324   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:14.263556   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:14.263723   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:38:14.263748   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:14.263884   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:38:14.264008   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:38:14.264031   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:38:14.264072   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:38:14.264152   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:38:14.264243   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:38:14.264326   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:38:14.264387   25510 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
	I0804 00:38:14.264474   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:38:14.264609   25510 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
	I0804 00:38:14.371161   25510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:38:14.376988   25510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:38:14.377057   25510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:38:14.397803   25510 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:38:14.397829   25510 start.go:495] detecting cgroup driver to use...
	I0804 00:38:14.397967   25510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:38:14.420340   25510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0804 00:38:14.432632   25510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 00:38:14.444438   25510 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 00:38:14.444485   25510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 00:38:14.455993   25510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 00:38:14.468484   25510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 00:38:14.480157   25510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 00:38:14.492396   25510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:38:14.503333   25510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 00:38:14.513683   25510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 00:38:14.524306   25510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0804 00:38:14.534845   25510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:38:14.546058   25510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:38:14.556163   25510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:38:14.675840   25510 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0804 00:38:14.702706   25510 start.go:495] detecting cgroup driver to use...
	I0804 00:38:14.702806   25510 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 00:38:14.725870   25510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:38:14.744691   25510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:38:14.775797   25510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:38:14.789716   25510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 00:38:14.802691   25510 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0804 00:38:14.826208   25510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 00:38:14.839810   25510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:38:14.859860   25510 ssh_runner.go:195] Run: which cri-dockerd
	I0804 00:38:14.864004   25510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 00:38:14.873703   25510 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0804 00:38:14.891236   25510 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 00:38:15.012240   25510 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 00:38:15.137153   25510 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 00:38:15.137313   25510 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0804 00:38:15.155559   25510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:38:15.276327   25510 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 00:39:16.351082   25510 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.074721712s)
	I0804 00:39:16.351156   25510 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0804 00:39:16.372451   25510 out.go:177] 
	W0804 00:39:16.373746   25510 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 04 00:38:12 ha-230158-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.326177741Z" level=info msg="Starting up"
	Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.327119521Z" level=info msg="containerd not running, starting managed containerd"
	Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.328077611Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=495
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.357083625Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380119843Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380244399Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380327326Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380365537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380659854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380746850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380936636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380980166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381089469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381129276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 04 00:38:12 ha-230158-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.326177741Z" level=info msg="Starting up"
	Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.327119521Z" level=info msg="containerd not running, starting managed containerd"
	Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.328077611Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=495
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.357083625Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380119843Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380244399Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380327326Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380365537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380659854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380746850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380936636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380980166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381089469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381129276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381357657Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381722077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.383943023Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384068421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384246838Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384299443Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384545997Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384617831Z" level=info msg="metadata content store policy set" policy=shared
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388127474Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388219544Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388276421Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388319410Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388361671Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388455180Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388694738Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388804208Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388845843Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388892231Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388935349Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388976334Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389099850Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389142923Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389183640Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389240347Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389279107Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389315090Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389370248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389408112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389451331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389494375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389530635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389577103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389617512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389658338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389704850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389746329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389781917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389817387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389854329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389893335Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389945127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389981949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390070588Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390151066Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390196084Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390231931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390268726Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390302779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390339825Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390382329Z" level=info msg="NRI interface is disabled by configuration."
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390645097Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390719485Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390779483Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390823688Z" level=info msg="containerd successfully booted in 0.035317s"
	Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.355694047Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.417198292Z" level=info msg="Loading containers: start."
	Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.603908628Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.697697573Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.760132523Z" level=info msg="Loading containers: done."
	Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.774708591Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.775080161Z" level=info msg="Daemon has completed initialization"
	Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.809171865Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 04 00:38:13 ha-230158-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.809357764Z" level=info msg="API listen on [::]:2376"
	Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.262432246Z" level=info msg="Processing signal 'terminated'"
	Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.264385339Z" level=info msg="Daemon shutdown complete"
	Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.264545438Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.264639728Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.265397657Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Aug 04 00:38:15 ha-230158-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 04 00:38:16 ha-230158-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 04 00:38:16 ha-230158-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 04 00:38:16 ha-230158-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 04 00:38:16 ha-230158-m02 dockerd[1098]: time="2024-08-04T00:38:16.310736920Z" level=info msg="Starting up"
	Aug 04 00:39:16 ha-230158-m02 dockerd[1098]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 04 00:39:16 ha-230158-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 04 00:39:16 ha-230158-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 04 00:39:16 ha-230158-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0804 00:39:16.373803   25510 out.go:239] * 
	W0804 00:39:16.376664   25510 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 00:39:16.378346   25510 out.go:177] 

** /stderr **
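The root cause is visible in the journalctl dump above: the first docker.service start (dockerd[488]) logged "containerd not running, starting managed containerd" and came up, but after the stop/start cycle at 00:38:15-00:38:16 the second start (dockerd[1098]) only logged "Starting up" and then failed to dial /run/containerd/containerd.sock for 60 seconds before systemd marked the unit failed. A possible follow-up on the node while the VM is still running, sketched under the assumption that the profile and node names from this run (ha-230158, m02) still resolve; these commands are illustrative and not part of the test:

    # check whether a containerd is running and in what state (assumed follow-up, not from the test run)
    out/minikube-linux-amd64 -p ha-230158 ssh -n m02 "sudo systemctl status containerd docker"
    # confirm whether the dial target exists; the managed containerd from the first start used a different socket path
    out/minikube-linux-amd64 -p ha-230158 ssh -n m02 "ls -l /run/containerd/containerd.sock /var/run/docker/containerd/containerd.sock"

If neither socket exists at that point, the timeout above is the expected symptom: dockerd keeps retrying the containerd endpoint and gives up once the dial context deadline is exceeded.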
ha_test.go:422: I0804 00:37:58.331977   25510 out.go:291] Setting OutFile to fd 1 ...
I0804 00:37:58.332120   25510 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:37:58.332130   25510 out.go:304] Setting ErrFile to fd 2...
I0804 00:37:58.332141   25510 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:37:58.332317   25510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
I0804 00:37:58.332555   25510 mustload.go:65] Loading cluster: ha-230158
I0804 00:37:58.332888   25510 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:37:58.333279   25510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:37:58.333322   25510 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:37:58.348801   25510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
I0804 00:37:58.349217   25510 main.go:141] libmachine: () Calling .GetVersion
I0804 00:37:58.349800   25510 main.go:141] libmachine: Using API Version  1
I0804 00:37:58.349821   25510 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:37:58.350182   25510 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:37:58.350406   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
W0804 00:37:58.352024   25510 host.go:58] "ha-230158-m02" host status: Stopped
I0804 00:37:58.354076   25510 out.go:177] * Starting "ha-230158-m02" control-plane node in "ha-230158" cluster
I0804 00:37:58.355485   25510 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0804 00:37:58.355536   25510 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
I0804 00:37:58.355553   25510 cache.go:56] Caching tarball of preloaded images
I0804 00:37:58.355653   25510 preload.go:172] Found /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0804 00:37:58.355665   25510 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0804 00:37:58.355778   25510 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
I0804 00:37:58.355958   25510 start.go:360] acquireMachinesLock for ha-230158-m02: {Name:mk3c8b650475b5a29be5f1e49e0345d4de7c1632 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0804 00:37:58.356011   25510 start.go:364] duration metric: took 24.45µs to acquireMachinesLock for "ha-230158-m02"
I0804 00:37:58.356028   25510 start.go:96] Skipping create...Using existing machine configuration
I0804 00:37:58.356038   25510 fix.go:54] fixHost starting: m02
I0804 00:37:58.356354   25510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:37:58.356386   25510 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:37:58.371434   25510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40781
I0804 00:37:58.371809   25510 main.go:141] libmachine: () Calling .GetVersion
I0804 00:37:58.372299   25510 main.go:141] libmachine: Using API Version  1
I0804 00:37:58.372319   25510 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:37:58.372716   25510 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:37:58.372896   25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:37:58.373043   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
I0804 00:37:58.374382   25510 fix.go:112] recreateIfNeeded on ha-230158-m02: state=Stopped err=<nil>
I0804 00:37:58.374406   25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
W0804 00:37:58.374556   25510 fix.go:138] unexpected machine state, will restart: <nil>
I0804 00:37:58.376389   25510 out.go:177] * Restarting existing kvm2 VM for "ha-230158-m02" ...
I0804 00:37:58.377504   25510 main.go:141] libmachine: (ha-230158-m02) Calling .Start
I0804 00:37:58.377660   25510 main.go:141] libmachine: (ha-230158-m02) Ensuring networks are active...
I0804 00:37:58.378211   25510 main.go:141] libmachine: (ha-230158-m02) Ensuring network default is active
I0804 00:37:58.378552   25510 main.go:141] libmachine: (ha-230158-m02) Ensuring network mk-ha-230158 is active
I0804 00:37:58.378934   25510 main.go:141] libmachine: (ha-230158-m02) Getting domain xml...
I0804 00:37:58.379491   25510 main.go:141] libmachine: (ha-230158-m02) Creating domain...
I0804 00:37:59.624645   25510 main.go:141] libmachine: (ha-230158-m02) Waiting to get IP...
I0804 00:37:59.625595   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:37:59.626042   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has current primary IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:37:59.626081   25510 main.go:141] libmachine: (ha-230158-m02) Found IP for machine: 192.168.39.188
I0804 00:37:59.626095   25510 main.go:141] libmachine: (ha-230158-m02) Reserving static IP address...
I0804 00:37:59.626602   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "ha-230158-m02", mac: "52:54:00:18:6b:a7", ip: "192.168.39.188"} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:37:59.626628   25510 main.go:141] libmachine: (ha-230158-m02) Reserved static IP address: 192.168.39.188
I0804 00:37:59.626649   25510 main.go:141] libmachine: (ha-230158-m02) DBG | skip adding static IP to network mk-ha-230158 - found existing host DHCP lease matching {name: "ha-230158-m02", mac: "52:54:00:18:6b:a7", ip: "192.168.39.188"}
I0804 00:37:59.626667   25510 main.go:141] libmachine: (ha-230158-m02) DBG | Getting to WaitForSSH function...
I0804 00:37:59.626679   25510 main.go:141] libmachine: (ha-230158-m02) Waiting for SSH to be available...
I0804 00:37:59.628881   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:37:59.629315   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:37:59.629341   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:37:59.629473   25510 main.go:141] libmachine: (ha-230158-m02) DBG | Using SSH client type: external
I0804 00:37:59.629501   25510 main.go:141] libmachine: (ha-230158-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa (-rw-------)
I0804 00:37:59.629576   25510 main.go:141] libmachine: (ha-230158-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.188 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0804 00:37:59.629606   25510 main.go:141] libmachine: (ha-230158-m02) DBG | About to run SSH command:
I0804 00:37:59.629620   25510 main.go:141] libmachine: (ha-230158-m02) DBG | exit 0
I0804 00:38:10.770721   25510 main.go:141] libmachine: (ha-230158-m02) DBG | SSH cmd err, output: <nil>: 
I0804 00:38:10.771116   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetConfigRaw
I0804 00:38:10.771802   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
I0804 00:38:10.774556   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:10.775061   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:10.775093   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:10.775352   25510 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
I0804 00:38:10.775590   25510 machine.go:94] provisionDockerMachine start ...
I0804 00:38:10.775613   25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:10.775828   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:10.778196   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:10.778563   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:10.778587   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:10.778743   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:10.778896   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:10.779103   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:10.779249   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:10.779397   25510 main.go:141] libmachine: Using SSH client type: native
I0804 00:38:10.779583   25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:38:10.779595   25510 main.go:141] libmachine: About to run SSH command:
hostname
I0804 00:38:10.894581   25510 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0804 00:38:10.894614   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetMachineName
I0804 00:38:10.894814   25510 buildroot.go:166] provisioning hostname "ha-230158-m02"
I0804 00:38:10.894837   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetMachineName
I0804 00:38:10.895004   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:10.897476   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:10.897844   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:10.897881   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:10.897983   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:10.898155   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:10.898354   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:10.898508   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:10.898677   25510 main.go:141] libmachine: Using SSH client type: native
I0804 00:38:10.898885   25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:38:10.898903   25510 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-230158-m02 && echo "ha-230158-m02" | sudo tee /etc/hostname
I0804 00:38:11.026137   25510 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-230158-m02

I0804 00:38:11.026164   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:11.029047   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.029537   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:11.029569   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.029738   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:11.029932   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.030104   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.030262   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:11.030442   25510 main.go:141] libmachine: Using SSH client type: native
I0804 00:38:11.030614   25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:38:11.030630   25510 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sha-230158-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-230158-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 ha-230158-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I0804 00:38:11.156086   25510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0804 00:38:11.156110   25510 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-3947/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-3947/.minikube}
I0804 00:38:11.156138   25510 buildroot.go:174] setting up certificates
I0804 00:38:11.156146   25510 provision.go:84] configureAuth start
I0804 00:38:11.156154   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetMachineName
I0804 00:38:11.156432   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
I0804 00:38:11.159121   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.159564   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:11.159594   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.159837   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:11.162124   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.162514   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:11.162546   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.162670   25510 provision.go:143] copyHostCerts
I0804 00:38:11.162702   25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem
I0804 00:38:11.162745   25510 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem, removing ...
I0804 00:38:11.162757   25510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem
I0804 00:38:11.162841   25510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem (1679 bytes)
I0804 00:38:11.162933   25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem
I0804 00:38:11.162957   25510 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem, removing ...
I0804 00:38:11.162964   25510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem
I0804 00:38:11.163001   25510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem (1082 bytes)
I0804 00:38:11.163058   25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem
I0804 00:38:11.163087   25510 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem, removing ...
I0804 00:38:11.163096   25510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem
I0804 00:38:11.163133   25510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem (1123 bytes)
I0804 00:38:11.163210   25510 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem org=jenkins.ha-230158-m02 san=[127.0.0.1 192.168.39.188 ha-230158-m02 localhost minikube]
I0804 00:38:11.457749   25510 provision.go:177] copyRemoteCerts
I0804 00:38:11.457804   25510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0804 00:38:11.457831   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:11.460834   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.461178   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:11.461216   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.461413   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:11.461642   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.461792   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:11.462029   25510 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:38:11.552505   25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0804 00:38:11.552571   25510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0804 00:38:11.577189   25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem -> /etc/docker/server.pem
I0804 00:38:11.577290   25510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0804 00:38:11.602036   25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0804 00:38:11.602101   25510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0804 00:38:11.625880   25510 provision.go:87] duration metric: took 469.715717ms to configureAuth
I0804 00:38:11.625907   25510 buildroot.go:189] setting minikube options for container-runtime
I0804 00:38:11.626132   25510 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:38:11.626154   25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:11.626421   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:11.629200   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.629715   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:11.629742   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.629913   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:11.630078   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.630223   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.630379   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:11.630558   25510 main.go:141] libmachine: Using SSH client type: native
I0804 00:38:11.630716   25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:38:11.630727   25510 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0804 00:38:11.748109   25510 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0804 00:38:11.748129   25510 buildroot.go:70] root file system type: tmpfs
I0804 00:38:11.748260   25510 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0804 00:38:11.748288   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:11.751057   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.751421   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:11.751455   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.751712   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:11.751977   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.752136   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.752311   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:11.752476   25510 main.go:141] libmachine: Using SSH client type: native
I0804 00:38:11.752674   25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:38:11.752768   25510 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0804 00:38:11.885782   25510 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
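Note: the empty ExecStart= followed immediately by the populated one is the standard systemd idiom for replacing an inherited start command; as the comment block in the unit itself explains, systemd rejects a non-oneshot service with two accumulated ExecStart= values. minikube rewrites the whole unit at /lib/systemd/system/docker.service, but the same idiom is more often applied as a drop-in override; a minimal sketch of that variant (flags abbreviated, not taken from this run):

    sudo mkdir -p /etc/systemd/system/docker.service.d
    printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' \
      | sudo tee /etc/systemd/system/docker.service.d/override.conf
    sudo systemctl daemon-reload && sudo systemctl restart docker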

I0804 00:38:11.885830   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:11.888701   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.889069   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:11.889098   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.889241   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:11.889427   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.889701   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.889860   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:11.890052   25510 main.go:141] libmachine: Using SSH client type: native
I0804 00:38:11.890250   25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:38:11.890274   25510 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0804 00:38:13.843420   25510 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
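Note: the construct run above, `diff -u old new || { mv; daemon-reload; enable; restart; }`, makes the unit update idempotent: diff exits 0 when the two files match, short-circuiting the ||, so docker is only reinstalled and restarted when the rendered unit actually changed. Here /lib/systemd/system/docker.service did not exist yet, so diff failed, the new unit was moved into place, and the "Created symlink" line is `systemctl enable docker` taking effect. The general shape of the idiom:

    sudo diff -u /path/current /path/candidate || {
      sudo mv /path/candidate /path/current
      sudo systemctl daemon-reload && sudo systemctl restart docker
    }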

I0804 00:38:13.843461   25510 machine.go:97] duration metric: took 3.067856975s to provisionDockerMachine
I0804 00:38:13.843473   25510 start.go:293] postStartSetup for "ha-230158-m02" (driver="kvm2")
I0804 00:38:13.843482   25510 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0804 00:38:13.843498   25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:13.843800   25510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0804 00:38:13.843831   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:13.846779   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:13.847277   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:13.847305   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:13.847479   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:13.847712   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:13.847892   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:13.848015   25510 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:38:13.937619   25510 ssh_runner.go:195] Run: cat /etc/os-release
I0804 00:38:13.941892   25510 info.go:137] Remote host: Buildroot 2023.02.9
I0804 00:38:13.941913   25510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-3947/.minikube/addons for local assets ...
I0804 00:38:13.941999   25510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-3947/.minikube/files for local assets ...
I0804 00:38:13.942104   25510 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> 111362.pem in /etc/ssl/certs
I0804 00:38:13.942117   25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> /etc/ssl/certs/111362.pem
I0804 00:38:13.942261   25510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0804 00:38:13.952175   25510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem --> /etc/ssl/certs/111362.pem (1708 bytes)
I0804 00:38:13.976761   25510 start.go:296] duration metric: took 133.275449ms for postStartSetup
I0804 00:38:13.976800   25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:13.977069   25510 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0804 00:38:13.977090   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:13.980173   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:13.980544   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:13.980596   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:13.980800   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:13.981072   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:13.981269   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:13.981412   25510 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:38:14.071182   25510 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0804 00:38:14.071254   25510 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0804 00:38:14.130544   25510 fix.go:56] duration metric: took 15.774500667s for fixHost
I0804 00:38:14.130591   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:14.133406   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.133762   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:14.133788   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.133983   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:14.134181   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:14.134372   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:14.134501   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:14.134694   25510 main.go:141] libmachine: Using SSH client type: native
I0804 00:38:14.134887   25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:38:14.134901   25510 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0804 00:38:14.255857   25510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722731894.223692124

I0804 00:38:14.255885   25510 fix.go:216] guest clock: 1722731894.223692124
I0804 00:38:14.255908   25510 fix.go:229] Guest: 2024-08-04 00:38:14.223692124 +0000 UTC Remote: 2024-08-04 00:38:14.130571736 +0000 UTC m=+15.831243026 (delta=93.120388ms)
I0804 00:38:14.255935   25510 fix.go:200] guest clock delta is within tolerance: 93.120388ms
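Note: the mangled command above, `date +%!s(MISSING).%!N(MISSING)`, is a Go printf artifact in the log itself; the command actually executed in the guest is `date +%s.%N`, and its output (1722731894.223692124) is compared against the host clock to measure skew. The 93.12ms delta is inside minikube's tolerance, so the guest clock is left untouched. An illustrative manual version of the same check:

    guest=$(minikube ssh -p ha-230158 -n m02 'date +%s.%N')
    host=$(date +%s.%N)
    echo "skew: $(echo "$host - $guest" | bc)s"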
I0804 00:38:14.255944   25510 start.go:83] releasing machines lock for "ha-230158-m02", held for 15.899924306s
I0804 00:38:14.255973   25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:14.256217   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
I0804 00:38:14.258949   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.259352   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:14.259371   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.259571   25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:14.260000   25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:14.260224   25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:14.260339   25510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0804 00:38:14.260409   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:14.260481   25510 ssh_runner.go:195] Run: systemctl --version
I0804 00:38:14.260503   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:14.263324   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.263556   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.263723   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:14.263748   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.263884   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:14.264008   25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:14.264031   25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.264072   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:14.264152   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:14.264243   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:14.264326   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:14.264387   25510 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:38:14.264474   25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:14.264609   25510 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:38:14.371161   25510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0804 00:38:14.376988   25510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0804 00:38:14.377057   25510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0804 00:38:14.397803   25510 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
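Note: minikube disarms any pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix so they cannot conflict with the CNI it manages; the `%!p(MISSING)` in the find command above is the same log-formatting artifact, standing in for `-printf "%p, "`. A readable equivalent of that command:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;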
I0804 00:38:14.397829   25510 start.go:495] detecting cgroup driver to use...
I0804 00:38:14.397967   25510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0804 00:38:14.420340   25510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0804 00:38:14.432632   25510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0804 00:38:14.444438   25510 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0804 00:38:14.444485   25510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0804 00:38:14.455993   25510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0804 00:38:14.468484   25510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0804 00:38:14.480157   25510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0804 00:38:14.492396   25510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0804 00:38:14.503333   25510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0804 00:38:14.513683   25510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0804 00:38:14.524306   25510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0804 00:38:14.534845   25510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0804 00:38:14.546058   25510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0804 00:38:14.556163   25510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:38:14.675840   25510 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0804 00:38:14.702706   25510 start.go:495] detecting cgroup driver to use...
I0804 00:38:14.702806   25510 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0804 00:38:14.725870   25510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0804 00:38:14.744691   25510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0804 00:38:14.775797   25510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0804 00:38:14.789716   25510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0804 00:38:14.802691   25510 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0804 00:38:14.826208   25510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0804 00:38:14.839810   25510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0804 00:38:14.859860   25510 ssh_runner.go:195] Run: which cri-dockerd
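Note: /etc/crictl.yaml has now been written twice in this sequence: during runtime detection it pointed crictl at containerd, and once docker was selected, containerd and crio were stopped (the `systemctl is-active --quiet` probes above exit 0 only for a running unit, which is what gates each conditional `stop -f`) and the endpoint was rewritten for cri-dockerd. The file ends up as just:

    runtime-endpoint: unix:///var/run/cri-dockerd.sock

The `which cri-dockerd` check above verifies the shim binary exists before its systemd drop-in is installed.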
I0804 00:38:14.864004   25510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0804 00:38:14.873703   25510 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0804 00:38:14.891236   25510 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0804 00:38:15.012240   25510 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0804 00:38:15.137153   25510 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0804 00:38:15.137313   25510 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
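Note: this daemon.json push is what "configuring docker to use cgroupfs" amounts to: docker's cgroup driver must match the kubelet's, and a mismatch is a classic cause of kubelet start failures. The log records only the size (130 bytes), not the contents; purely as an illustration, a daemon.json selecting the cgroupfs driver would look like:

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }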
I0804 00:38:15.155559   25510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:38:15.276327   25510 ssh_runner.go:195] Run: sudo systemctl restart docker
I0804 00:39:16.351082   25510 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.074721712s)
I0804 00:39:16.351156   25510 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0804 00:39:16.372451   25510 out.go:177] 
W0804 00:39:16.373746   25510 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

sudo journalctl --no-pager -u docker:
-- stdout --
Aug 04 00:38:12 ha-230158-m02 systemd[1]: Starting Docker Application Container Engine...
Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.326177741Z" level=info msg="Starting up"
Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.327119521Z" level=info msg="containerd not running, starting managed containerd"
Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.328077611Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=495
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.357083625Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380119843Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380244399Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380327326Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380365537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380659854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380746850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380936636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380980166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381089469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381129276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381357657Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381722077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.383943023Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384068421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384246838Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384299443Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384545997Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384617831Z" level=info msg="metadata content store policy set" policy=shared
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388127474Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388219544Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388276421Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388319410Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388361671Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388455180Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388694738Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388804208Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388845843Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388892231Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388935349Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388976334Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389099850Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389142923Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389183640Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389240347Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389279107Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389315090Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389370248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389408112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389451331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389494375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389530635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389577103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389617512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389658338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389704850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389746329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389781917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389817387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389854329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389893335Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389945127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389981949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390070588Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390151066Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390196084Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390231931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390268726Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390302779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390339825Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390382329Z" level=info msg="NRI interface is disabled by configuration."
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390645097Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390719485Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390779483Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390823688Z" level=info msg="containerd successfully booted in 0.035317s"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.355694047Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.417198292Z" level=info msg="Loading containers: start."
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.603908628Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.697697573Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.760132523Z" level=info msg="Loading containers: done."
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.774708591Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.775080161Z" level=info msg="Daemon has completed initialization"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.809171865Z" level=info msg="API listen on /var/run/docker.sock"
Aug 04 00:38:13 ha-230158-m02 systemd[1]: Started Docker Application Container Engine.
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.809357764Z" level=info msg="API listen on [::]:2376"
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.262432246Z" level=info msg="Processing signal 'terminated'"
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.264385339Z" level=info msg="Daemon shutdown complete"
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.264545438Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.264639728Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.265397657Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
Aug 04 00:38:15 ha-230158-m02 systemd[1]: Stopping Docker Application Container Engine...
Aug 04 00:38:16 ha-230158-m02 systemd[1]: docker.service: Deactivated successfully.
Aug 04 00:38:16 ha-230158-m02 systemd[1]: Stopped Docker Application Container Engine.
Aug 04 00:38:16 ha-230158-m02 systemd[1]: Starting Docker Application Container Engine...
Aug 04 00:38:16 ha-230158-m02 dockerd[1098]: time="2024-08-04T00:38:16.310736920Z" level=info msg="Starting up"
Aug 04 00:39:16 ha-230158-m02 dockerd[1098]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Aug 04 00:39:16 ha-230158-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Aug 04 00:39:16 ha-230158-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
Aug 04 00:39:16 ha-230158-m02 systemd[1]: Failed to start Docker Application Container Engine.

-- /stdout --
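Note: the failure itself is in the last journal lines above: the restarted dockerd (pid 1098, 00:38:16) tried to dial /run/containerd/containerd.sock and gave up after its 60-second dial deadline (00:39:16), whereas the first dockerd (pid 488) had spawned its own managed containerd. minikube stopped the system containerd unit at 00:38:14 (`systemctl stop -f containerd` earlier in this log), so one plausible reading is that a leftover socket file led the new dockerd to wait on a containerd instance that was never coming back. If reproducing on the node, the places to look first:

    sudo systemctl status containerd --no-pager
    ls -l /run/containerd/containerd.sock
    sudo journalctl -u containerd --no-pager | tail -n 50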
X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

                                                
                                                
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

                                                
                                                
sudo journalctl --no-pager -u docker:
-- stdout --
Aug 04 00:38:12 ha-230158-m02 systemd[1]: Starting Docker Application Container Engine...
Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.326177741Z" level=info msg="Starting up"
Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.327119521Z" level=info msg="containerd not running, starting managed containerd"
Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.328077611Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=495
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.357083625Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380119843Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380244399Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380327326Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380365537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380659854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380746850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380936636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380980166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381089469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381129276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381357657Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381722077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.383943023Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384068421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384246838Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384299443Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384545997Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384617831Z" level=info msg="metadata content store policy set" policy=shared
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388127474Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388219544Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388276421Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388319410Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388361671Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388455180Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388694738Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388804208Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388845843Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388892231Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388935349Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388976334Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389099850Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389142923Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389183640Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389240347Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389279107Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389315090Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389370248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389408112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389451331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389494375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389530635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389577103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389617512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389658338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389704850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389746329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389781917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389817387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389854329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389893335Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389945127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389981949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390070588Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390151066Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390196084Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390231931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390268726Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390302779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390339825Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390382329Z" level=info msg="NRI interface is disabled by configuration."
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390645097Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390719485Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390779483Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390823688Z" level=info msg="containerd successfully booted in 0.035317s"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.355694047Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.417198292Z" level=info msg="Loading containers: start."
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.603908628Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.697697573Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.760132523Z" level=info msg="Loading containers: done."
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.774708591Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.775080161Z" level=info msg="Daemon has completed initialization"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.809171865Z" level=info msg="API listen on /var/run/docker.sock"
Aug 04 00:38:13 ha-230158-m02 systemd[1]: Started Docker Application Container Engine.
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.809357764Z" level=info msg="API listen on [::]:2376"
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.262432246Z" level=info msg="Processing signal 'terminated'"
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.264385339Z" level=info msg="Daemon shutdown complete"
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.264545438Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.264639728Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.265397657Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
Aug 04 00:38:15 ha-230158-m02 systemd[1]: Stopping Docker Application Container Engine...
Aug 04 00:38:16 ha-230158-m02 systemd[1]: docker.service: Deactivated successfully.
Aug 04 00:38:16 ha-230158-m02 systemd[1]: Stopped Docker Application Container Engine.
Aug 04 00:38:16 ha-230158-m02 systemd[1]: Starting Docker Application Container Engine...
Aug 04 00:38:16 ha-230158-m02 dockerd[1098]: time="2024-08-04T00:38:16.310736920Z" level=info msg="Starting up"
Aug 04 00:39:16 ha-230158-m02 dockerd[1098]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Aug 04 00:39:16 ha-230158-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Aug 04 00:39:16 ha-230158-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
Aug 04 00:39:16 ha-230158-m02 systemd[1]: Failed to start Docker Application Container Engine.

-- /stdout --
W0804 00:39:16.373803   25510 out.go:239] * 
W0804 00:39:16.376664   25510 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0804 00:39:16.378346   25510 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-linux-amd64 -p ha-230158 node start m02 -v=7 --alsologtostderr": exit status 90
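The journal excerpt above pins down the proximate cause: after systemd restarted docker.service at 00:38:16, the new dockerd (PID 1098) could not dial /run/containerd/containerd.sock (the system containerd socket) within its 60-second deadline, so the daemon never came up and `node start` aborted with exit status 90. A hypothetical manual triage session (not part of the harness; the `-n m02` node flag and the restart step are assumptions) would inspect and bounce containerd on the m02 guest:

    # Hypothetical triage; assumes the ha-230158-m02 VM is still running.
    minikube ssh -p ha-230158 -n m02 -- sudo systemctl status containerd
    minikube ssh -p ha-230158 -n m02 -- sudo journalctl -u containerd --no-pager | tail -n 50
    # If containerd never came back after the 00:38:15 docker shutdown, bouncing
    # both units by hand and retrying the node start is the usual next step:
    minikube ssh -p ha-230158 -n m02 -- sudo systemctl restart containerd docker
    out/minikube-linux-amd64 -p ha-230158 node start m02 -v=7 --alsologtostderr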
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr: exit status 2 (781.074493ms)

-- stdout --
	ha-230158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-230158-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-230158-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-230158-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0804 00:39:16.446979   25881 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:39:16.447079   25881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:39:16.447084   25881 out.go:304] Setting ErrFile to fd 2...
	I0804 00:39:16.447088   25881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:39:16.447276   25881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
	I0804 00:39:16.447427   25881 out.go:298] Setting JSON to false
	I0804 00:39:16.447445   25881 mustload.go:65] Loading cluster: ha-230158
	I0804 00:39:16.447491   25881 notify.go:220] Checking for updates...
	I0804 00:39:16.447784   25881 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:39:16.447796   25881 status.go:255] checking status of ha-230158 ...
	I0804 00:39:16.448152   25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:16.448219   25881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:16.468500   25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34607
	I0804 00:39:16.468919   25881 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:16.469528   25881 main.go:141] libmachine: Using API Version  1
	I0804 00:39:16.469555   25881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:16.469938   25881 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:16.470298   25881 main.go:141] libmachine: (ha-230158) Calling .GetState
	I0804 00:39:16.471874   25881 status.go:330] ha-230158 host status = "Running" (err=<nil>)
	I0804 00:39:16.471887   25881 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:39:16.472191   25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:16.472230   25881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:16.487173   25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42743
	I0804 00:39:16.487542   25881 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:16.487933   25881 main.go:141] libmachine: Using API Version  1
	I0804 00:39:16.487959   25881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:16.488272   25881 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:16.488445   25881 main.go:141] libmachine: (ha-230158) Calling .GetIP
	I0804 00:39:16.491622   25881 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:16.492137   25881 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:39:16.492161   25881 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:16.492241   25881 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:39:16.492553   25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:16.492592   25881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:16.508012   25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44043
	I0804 00:39:16.508448   25881 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:16.508941   25881 main.go:141] libmachine: Using API Version  1
	I0804 00:39:16.508965   25881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:16.509300   25881 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:16.509492   25881 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:39:16.509698   25881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:16.509729   25881 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:39:16.512943   25881 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:16.513430   25881 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:39:16.513483   25881 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:16.513619   25881 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:39:16.513797   25881 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:39:16.513929   25881 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:39:16.514104   25881 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
	I0804 00:39:16.603638   25881 ssh_runner.go:195] Run: systemctl --version
	I0804 00:39:16.611500   25881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:16.628752   25881 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:39:16.628773   25881 api_server.go:166] Checking apiserver status ...
	I0804 00:39:16.628801   25881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:39:16.643175   25881 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup
	W0804 00:39:16.652817   25881 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:39:16.652856   25881 ssh_runner.go:195] Run: ls
	I0804 00:39:16.658780   25881 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 00:39:16.665624   25881 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 00:39:16.665647   25881 status.go:422] ha-230158 apiserver status = Running (err=<nil>)
	I0804 00:39:16.665659   25881 status.go:257] ha-230158 status: &{Name:ha-230158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:39:16.665691   25881 status.go:255] checking status of ha-230158-m02 ...
	I0804 00:39:16.666098   25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:16.666148   25881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:16.680590   25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I0804 00:39:16.680985   25881 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:16.681474   25881 main.go:141] libmachine: Using API Version  1
	I0804 00:39:16.681491   25881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:16.681777   25881 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:16.681934   25881 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
	I0804 00:39:16.683714   25881 status.go:330] ha-230158-m02 host status = "Running" (err=<nil>)
	I0804 00:39:16.683732   25881 host.go:66] Checking if "ha-230158-m02" exists ...
	I0804 00:39:16.683996   25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:16.684026   25881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:16.699143   25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37313
	I0804 00:39:16.699452   25881 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:16.699868   25881 main.go:141] libmachine: Using API Version  1
	I0804 00:39:16.699891   25881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:16.700160   25881 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:16.700346   25881 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
	I0804 00:39:16.702734   25881 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:16.703088   25881 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:39:16.703117   25881 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:16.703262   25881 host.go:66] Checking if "ha-230158-m02" exists ...
	I0804 00:39:16.703571   25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:16.703616   25881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:16.718026   25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37103
	I0804 00:39:16.718394   25881 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:16.718848   25881 main.go:141] libmachine: Using API Version  1
	I0804 00:39:16.718870   25881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:16.719188   25881 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:16.719393   25881 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:39:16.719606   25881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:16.719626   25881 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:39:16.722480   25881 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:16.722887   25881 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:39:16.722919   25881 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:16.723104   25881 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:39:16.723301   25881 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:39:16.723454   25881 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:39:16.723609   25881 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
	I0804 00:39:16.810346   25881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:16.825253   25881 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:39:16.825281   25881 api_server.go:166] Checking apiserver status ...
	I0804 00:39:16.825313   25881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0804 00:39:16.838609   25881 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:39:16.838634   25881 status.go:422] ha-230158-m02 apiserver status = Stopped (err=<nil>)
	I0804 00:39:16.838645   25881 status.go:257] ha-230158-m02 status: &{Name:ha-230158-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:39:16.838664   25881 status.go:255] checking status of ha-230158-m03 ...
	I0804 00:39:16.838975   25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:16.839022   25881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:16.854182   25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41227
	I0804 00:39:16.854754   25881 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:16.855289   25881 main.go:141] libmachine: Using API Version  1
	I0804 00:39:16.855321   25881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:16.855724   25881 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:16.855957   25881 main.go:141] libmachine: (ha-230158-m03) Calling .GetState
	I0804 00:39:16.857297   25881 status.go:330] ha-230158-m03 host status = "Running" (err=<nil>)
	I0804 00:39:16.857310   25881 host.go:66] Checking if "ha-230158-m03" exists ...
	I0804 00:39:16.857676   25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:16.857712   25881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:16.874346   25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33937
	I0804 00:39:16.874810   25881 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:16.875294   25881 main.go:141] libmachine: Using API Version  1
	I0804 00:39:16.875318   25881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:16.875628   25881 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:16.875779   25881 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
	I0804 00:39:16.878442   25881 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:16.878952   25881 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:39:16.878974   25881 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:16.879136   25881 host.go:66] Checking if "ha-230158-m03" exists ...
	I0804 00:39:16.879456   25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:16.879491   25881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:16.893139   25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41631
	I0804 00:39:16.893546   25881 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:16.894036   25881 main.go:141] libmachine: Using API Version  1
	I0804 00:39:16.894053   25881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:16.894322   25881 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:16.894489   25881 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
	I0804 00:39:16.894645   25881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:16.894675   25881 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	I0804 00:39:16.897190   25881 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:16.897605   25881 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:39:16.897632   25881 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:16.897750   25881 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
	I0804 00:39:16.897879   25881 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:39:16.898027   25881 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
	I0804 00:39:16.898194   25881 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
	I0804 00:39:16.978345   25881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:16.995749   25881 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:39:16.995778   25881 api_server.go:166] Checking apiserver status ...
	I0804 00:39:16.995815   25881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:39:17.010926   25881 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup
	W0804 00:39:17.020984   25881 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:39:17.021031   25881 ssh_runner.go:195] Run: ls
	I0804 00:39:17.025361   25881 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 00:39:17.029695   25881 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 00:39:17.029718   25881 status.go:422] ha-230158-m03 apiserver status = Running (err=<nil>)
	I0804 00:39:17.029729   25881 status.go:257] ha-230158-m03 status: &{Name:ha-230158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:39:17.029747   25881 status.go:255] checking status of ha-230158-m04 ...
	I0804 00:39:17.030118   25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:17.030154   25881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:17.044802   25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35331
	I0804 00:39:17.045168   25881 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:17.045630   25881 main.go:141] libmachine: Using API Version  1
	I0804 00:39:17.045659   25881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:17.045976   25881 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:17.046258   25881 main.go:141] libmachine: (ha-230158-m04) Calling .GetState
	I0804 00:39:17.047772   25881 status.go:330] ha-230158-m04 host status = "Running" (err=<nil>)
	I0804 00:39:17.047789   25881 host.go:66] Checking if "ha-230158-m04" exists ...
	I0804 00:39:17.048051   25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:17.048079   25881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:17.061807   25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38977
	I0804 00:39:17.062197   25881 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:17.062732   25881 main.go:141] libmachine: Using API Version  1
	I0804 00:39:17.062766   25881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:17.063059   25881 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:17.063257   25881 main.go:141] libmachine: (ha-230158-m04) Calling .GetIP
	I0804 00:39:17.066106   25881 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:17.066536   25881 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
	I0804 00:39:17.066560   25881 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:17.066695   25881 host.go:66] Checking if "ha-230158-m04" exists ...
	I0804 00:39:17.066973   25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:17.067002   25881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:17.080507   25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
	I0804 00:39:17.080875   25881 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:17.081289   25881 main.go:141] libmachine: Using API Version  1
	I0804 00:39:17.081307   25881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:17.081568   25881 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:17.081725   25881 main.go:141] libmachine: (ha-230158-m04) Calling .DriverName
	I0804 00:39:17.081874   25881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:17.081901   25881 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHHostname
	I0804 00:39:17.084272   25881 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:17.084606   25881 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
	I0804 00:39:17.084630   25881 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:17.084769   25881 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHPort
	I0804 00:39:17.084936   25881 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHKeyPath
	I0804 00:39:17.085083   25881 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHUsername
	I0804 00:39:17.085210   25881 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m04/id_rsa Username:docker}
	I0804 00:39:17.166318   25881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:17.182795   25881 status.go:257] ha-230158-m04 status: &{Name:ha-230158-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
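With m02's kubelet and apiserver reported Stopped while the host itself is Running, `status` exits non-zero (2 here) and the harness simply reruns the same command; the runs land a couple of seconds apart (00:39:16 above, 00:39:18 and 00:39:21 below). An equivalent manual poll, sketched purely for illustration (the interval and retry count are assumptions, not values taken from ha_test.go):

    # Illustrative retry loop mirroring the repeated status checks in this report.
    for i in 1 2 3; do
        out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr && break
        sleep 2
    done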
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr: exit status 2 (760.917148ms)

-- stdout --
	ha-230158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-230158-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-230158-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-230158-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0804 00:39:18.188252   25967 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:39:18.188365   25967 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:39:18.188375   25967 out.go:304] Setting ErrFile to fd 2...
	I0804 00:39:18.188382   25967 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:39:18.188646   25967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
	I0804 00:39:18.188890   25967 out.go:298] Setting JSON to false
	I0804 00:39:18.188920   25967 mustload.go:65] Loading cluster: ha-230158
	I0804 00:39:18.189040   25967 notify.go:220] Checking for updates...
	I0804 00:39:18.189448   25967 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:39:18.189470   25967 status.go:255] checking status of ha-230158 ...
	I0804 00:39:18.190066   25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:18.190115   25967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:18.209263   25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36127
	I0804 00:39:18.209647   25967 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:18.210275   25967 main.go:141] libmachine: Using API Version  1
	I0804 00:39:18.210304   25967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:18.210658   25967 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:18.210882   25967 main.go:141] libmachine: (ha-230158) Calling .GetState
	I0804 00:39:18.212547   25967 status.go:330] ha-230158 host status = "Running" (err=<nil>)
	I0804 00:39:18.212563   25967 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:39:18.212882   25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:18.212930   25967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:18.228301   25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35289
	I0804 00:39:18.228675   25967 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:18.229159   25967 main.go:141] libmachine: Using API Version  1
	I0804 00:39:18.229184   25967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:18.229553   25967 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:18.229754   25967 main.go:141] libmachine: (ha-230158) Calling .GetIP
	I0804 00:39:18.232553   25967 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:18.233020   25967 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:39:18.233053   25967 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:18.233150   25967 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:39:18.233451   25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:18.233510   25967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:18.249422   25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41195
	I0804 00:39:18.249854   25967 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:18.250260   25967 main.go:141] libmachine: Using API Version  1
	I0804 00:39:18.250283   25967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:18.250629   25967 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:18.250793   25967 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:39:18.251014   25967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:18.251044   25967 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:39:18.253831   25967 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:18.254271   25967 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:39:18.254307   25967 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:18.254398   25967 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:39:18.254557   25967 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:39:18.254701   25967 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:39:18.254822   25967 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
	I0804 00:39:18.337965   25967 ssh_runner.go:195] Run: systemctl --version
	I0804 00:39:18.346606   25967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:18.363492   25967 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:39:18.363519   25967 api_server.go:166] Checking apiserver status ...
	I0804 00:39:18.363557   25967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:39:18.377960   25967 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup
	W0804 00:39:18.388019   25967 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:39:18.388064   25967 ssh_runner.go:195] Run: ls
	I0804 00:39:18.392471   25967 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 00:39:18.399721   25967 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 00:39:18.399739   25967 status.go:422] ha-230158 apiserver status = Running (err=<nil>)
	I0804 00:39:18.399749   25967 status.go:257] ha-230158 status: &{Name:ha-230158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:39:18.399770   25967 status.go:255] checking status of ha-230158-m02 ...
	I0804 00:39:18.400150   25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:18.400190   25967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:18.415591   25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37623
	I0804 00:39:18.415950   25967 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:18.416423   25967 main.go:141] libmachine: Using API Version  1
	I0804 00:39:18.416438   25967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:18.416734   25967 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:18.416900   25967 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
	I0804 00:39:18.418596   25967 status.go:330] ha-230158-m02 host status = "Running" (err=<nil>)
	I0804 00:39:18.418615   25967 host.go:66] Checking if "ha-230158-m02" exists ...
	I0804 00:39:18.418890   25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:18.418921   25967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:18.433121   25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44637
	I0804 00:39:18.433544   25967 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:18.433926   25967 main.go:141] libmachine: Using API Version  1
	I0804 00:39:18.433950   25967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:18.434311   25967 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:18.434518   25967 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
	I0804 00:39:18.437210   25967 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:18.437714   25967 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:39:18.437752   25967 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:18.437812   25967 host.go:66] Checking if "ha-230158-m02" exists ...
	I0804 00:39:18.438099   25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:18.438130   25967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:18.452557   25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39741
	I0804 00:39:18.452973   25967 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:18.453492   25967 main.go:141] libmachine: Using API Version  1
	I0804 00:39:18.453513   25967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:18.453785   25967 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:18.453969   25967 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:39:18.454140   25967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:18.454162   25967 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:39:18.456937   25967 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:18.457323   25967 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:39:18.457349   25967 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:18.457478   25967 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:39:18.457623   25967 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:39:18.457772   25967 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:39:18.457947   25967 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
	I0804 00:39:18.541483   25967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:18.557391   25967 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:39:18.557416   25967 api_server.go:166] Checking apiserver status ...
	I0804 00:39:18.557462   25967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0804 00:39:18.569932   25967 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:39:18.569965   25967 status.go:422] ha-230158-m02 apiserver status = Stopped (err=<nil>)
	I0804 00:39:18.569977   25967 status.go:257] ha-230158-m02 status: &{Name:ha-230158-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:39:18.570006   25967 status.go:255] checking status of ha-230158-m03 ...
	I0804 00:39:18.570400   25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:18.570440   25967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:18.585174   25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35833
	I0804 00:39:18.585573   25967 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:18.586012   25967 main.go:141] libmachine: Using API Version  1
	I0804 00:39:18.586032   25967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:18.586385   25967 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:18.586578   25967 main.go:141] libmachine: (ha-230158-m03) Calling .GetState
	I0804 00:39:18.588082   25967 status.go:330] ha-230158-m03 host status = "Running" (err=<nil>)
	I0804 00:39:18.588095   25967 host.go:66] Checking if "ha-230158-m03" exists ...
	I0804 00:39:18.588359   25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:18.588386   25967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:18.603130   25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39793
	I0804 00:39:18.603535   25967 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:18.603993   25967 main.go:141] libmachine: Using API Version  1
	I0804 00:39:18.604016   25967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:18.604355   25967 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:18.604544   25967 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
	I0804 00:39:18.607076   25967 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:18.607445   25967 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:39:18.607481   25967 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:18.607599   25967 host.go:66] Checking if "ha-230158-m03" exists ...
	I0804 00:39:18.607873   25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:18.607902   25967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:18.621737   25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41985
	I0804 00:39:18.622113   25967 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:18.622558   25967 main.go:141] libmachine: Using API Version  1
	I0804 00:39:18.622579   25967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:18.622937   25967 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:18.623090   25967 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
	I0804 00:39:18.623313   25967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:18.623340   25967 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	I0804 00:39:18.626310   25967 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:18.626805   25967 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:39:18.626831   25967 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:18.626966   25967 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
	I0804 00:39:18.627142   25967 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:39:18.627355   25967 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
	I0804 00:39:18.627520   25967 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
	I0804 00:39:18.705732   25967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:18.721205   25967 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:39:18.721229   25967 api_server.go:166] Checking apiserver status ...
	I0804 00:39:18.721259   25967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:39:18.736058   25967 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup
	W0804 00:39:18.746379   25967 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:39:18.746429   25967 ssh_runner.go:195] Run: ls
	I0804 00:39:18.750833   25967 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 00:39:18.755000   25967 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 00:39:18.755021   25967 status.go:422] ha-230158-m03 apiserver status = Running (err=<nil>)
	I0804 00:39:18.755029   25967 status.go:257] ha-230158-m03 status: &{Name:ha-230158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:39:18.755046   25967 status.go:255] checking status of ha-230158-m04 ...
	I0804 00:39:18.755408   25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:18.755457   25967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:18.770168   25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44465
	I0804 00:39:18.770620   25967 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:18.771073   25967 main.go:141] libmachine: Using API Version  1
	I0804 00:39:18.771094   25967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:18.771408   25967 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:18.771608   25967 main.go:141] libmachine: (ha-230158-m04) Calling .GetState
	I0804 00:39:18.773243   25967 status.go:330] ha-230158-m04 host status = "Running" (err=<nil>)
	I0804 00:39:18.773264   25967 host.go:66] Checking if "ha-230158-m04" exists ...
	I0804 00:39:18.773580   25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:18.773614   25967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:18.788564   25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39011
	I0804 00:39:18.788985   25967 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:18.789464   25967 main.go:141] libmachine: Using API Version  1
	I0804 00:39:18.789486   25967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:18.789825   25967 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:18.790021   25967 main.go:141] libmachine: (ha-230158-m04) Calling .GetIP
	I0804 00:39:18.792979   25967 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:18.793396   25967 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
	I0804 00:39:18.793431   25967 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:18.793575   25967 host.go:66] Checking if "ha-230158-m04" exists ...
	I0804 00:39:18.793878   25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:18.793929   25967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:18.809117   25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37695
	I0804 00:39:18.809562   25967 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:18.809995   25967 main.go:141] libmachine: Using API Version  1
	I0804 00:39:18.810013   25967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:18.810342   25967 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:18.810546   25967 main.go:141] libmachine: (ha-230158-m04) Calling .DriverName
	I0804 00:39:18.810747   25967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:18.810768   25967 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHHostname
	I0804 00:39:18.813507   25967 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:18.813999   25967 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
	I0804 00:39:18.814023   25967 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:18.814172   25967 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHPort
	I0804 00:39:18.814354   25967 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHKeyPath
	I0804 00:39:18.814505   25967 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHUsername
	I0804 00:39:18.814647   25967 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m04/id_rsa Username:docker}
	I0804 00:39:18.893886   25967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:18.907678   25967 status.go:257] ha-230158-m04 status: &{Name:ha-230158-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
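One recurring warning in these checks is worth noting: `unable to find freezer cgroup` on the otherwise healthy control-plane nodes. The egrep for a `freezer:` line in /proc/<pid>/cgroup exits 1, which typically indicates a unified cgroup v2 hierarchy in the guest (a single `0::/...` entry, no per-controller lines); the checker then falls back to the /healthz probe, which returns 200, so the warning is benign. A hypothetical by-hand reproduction of the probe seen in these logs, run from inside a control-plane guest:

    # Hypothetical reproduction of the apiserver liveness probe.
    PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    sudo egrep '^[0-9]+:freezer:' "/proc/${PID}/cgroup" \
        || echo "no named freezer controller; likely cgroup v2"
    curl -sk https://192.168.39.254:8443/healthz   # expect: ok (readable anonymously under default RBAC)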
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr: exit status 2 (762.791458ms)

-- stdout --
	ha-230158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-230158-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-230158-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-230158-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0804 00:39:21.045311   26066 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:39:21.045407   26066 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:39:21.045411   26066 out.go:304] Setting ErrFile to fd 2...
	I0804 00:39:21.045416   26066 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:39:21.045593   26066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
	I0804 00:39:21.045744   26066 out.go:298] Setting JSON to false
	I0804 00:39:21.045766   26066 mustload.go:65] Loading cluster: ha-230158
	I0804 00:39:21.045858   26066 notify.go:220] Checking for updates...
	I0804 00:39:21.046204   26066 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:39:21.046220   26066 status.go:255] checking status of ha-230158 ...
	I0804 00:39:21.046692   26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:21.046745   26066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:21.062153   26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46327
	I0804 00:39:21.062592   26066 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:21.063188   26066 main.go:141] libmachine: Using API Version  1
	I0804 00:39:21.063211   26066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:21.063585   26066 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:21.063775   26066 main.go:141] libmachine: (ha-230158) Calling .GetState
	I0804 00:39:21.078972   26066 status.go:330] ha-230158 host status = "Running" (err=<nil>)
	I0804 00:39:21.078990   26066 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:39:21.079274   26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:21.079306   26066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:21.093889   26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42955
	I0804 00:39:21.094289   26066 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:21.094749   26066 main.go:141] libmachine: Using API Version  1
	I0804 00:39:21.094773   26066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:21.095126   26066 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:21.095372   26066 main.go:141] libmachine: (ha-230158) Calling .GetIP
	I0804 00:39:21.098319   26066 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:21.098871   26066 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:39:21.098903   26066 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:21.099055   26066 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:39:21.099332   26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:21.099370   26066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:21.114420   26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39777
	I0804 00:39:21.114780   26066 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:21.115212   26066 main.go:141] libmachine: Using API Version  1
	I0804 00:39:21.115232   26066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:21.115527   26066 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:21.115763   26066 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:39:21.115939   26066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:21.115970   26066 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:39:21.118755   26066 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:21.119175   26066 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:39:21.119203   26066 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:21.119325   26066 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:39:21.119498   26066 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:39:21.119742   26066 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:39:21.119895   26066 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
	I0804 00:39:21.202043   26066 ssh_runner.go:195] Run: systemctl --version
	I0804 00:39:21.208451   26066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:21.222056   26066 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:39:21.222078   26066 api_server.go:166] Checking apiserver status ...
	I0804 00:39:21.222106   26066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:39:21.235933   26066 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup
	W0804 00:39:21.246225   26066 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:39:21.246292   26066 ssh_runner.go:195] Run: ls
	I0804 00:39:21.252444   26066 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 00:39:21.256611   26066 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 00:39:21.256630   26066 status.go:422] ha-230158 apiserver status = Running (err=<nil>)
	I0804 00:39:21.256638   26066 status.go:257] ha-230158 status: &{Name:ha-230158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:39:21.256654   26066 status.go:255] checking status of ha-230158-m02 ...
	I0804 00:39:21.256976   26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:21.257011   26066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:21.271632   26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44623
	I0804 00:39:21.272048   26066 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:21.272525   26066 main.go:141] libmachine: Using API Version  1
	I0804 00:39:21.272552   26066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:21.272876   26066 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:21.273042   26066 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
	I0804 00:39:21.274436   26066 status.go:330] ha-230158-m02 host status = "Running" (err=<nil>)
	I0804 00:39:21.274453   26066 host.go:66] Checking if "ha-230158-m02" exists ...
	I0804 00:39:21.274941   26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:21.274988   26066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:21.290540   26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38465
	I0804 00:39:21.290896   26066 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:21.291360   26066 main.go:141] libmachine: Using API Version  1
	I0804 00:39:21.291382   26066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:21.291682   26066 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:21.291854   26066 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
	I0804 00:39:21.294578   26066 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:21.294972   26066 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:39:21.294992   26066 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:21.295161   26066 host.go:66] Checking if "ha-230158-m02" exists ...
	I0804 00:39:21.295500   26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:21.295543   26066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:21.309619   26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43119
	I0804 00:39:21.309957   26066 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:21.310428   26066 main.go:141] libmachine: Using API Version  1
	I0804 00:39:21.310447   26066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:21.310847   26066 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:21.311054   26066 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:39:21.311246   26066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:21.311264   26066 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:39:21.313773   26066 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:21.314177   26066 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:39:21.314205   26066 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:21.314335   26066 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:39:21.314481   26066 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:39:21.314645   26066 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:39:21.314809   26066 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
	I0804 00:39:21.397122   26066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:21.411805   26066 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:39:21.411831   26066 api_server.go:166] Checking apiserver status ...
	I0804 00:39:21.411869   26066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0804 00:39:21.423600   26066 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:39:21.423618   26066 status.go:422] ha-230158-m02 apiserver status = Stopped (err=<nil>)
	I0804 00:39:21.423628   26066 status.go:257] ha-230158-m02 status: &{Name:ha-230158-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:39:21.423644   26066 status.go:255] checking status of ha-230158-m03 ...
	I0804 00:39:21.423961   26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:21.424000   26066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:21.439785   26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39717
	I0804 00:39:21.440172   26066 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:21.440702   26066 main.go:141] libmachine: Using API Version  1
	I0804 00:39:21.440727   26066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:21.441003   26066 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:21.441203   26066 main.go:141] libmachine: (ha-230158-m03) Calling .GetState
	I0804 00:39:21.443034   26066 status.go:330] ha-230158-m03 host status = "Running" (err=<nil>)
	I0804 00:39:21.443052   26066 host.go:66] Checking if "ha-230158-m03" exists ...
	I0804 00:39:21.443460   26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:21.443504   26066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:21.457964   26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42657
	I0804 00:39:21.458398   26066 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:21.458838   26066 main.go:141] libmachine: Using API Version  1
	I0804 00:39:21.458862   26066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:21.459185   26066 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:21.459386   26066 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
	I0804 00:39:21.462057   26066 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:21.462610   26066 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:39:21.462637   26066 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:21.462802   26066 host.go:66] Checking if "ha-230158-m03" exists ...
	I0804 00:39:21.463175   26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:21.463218   26066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:21.477914   26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39659
	I0804 00:39:21.478308   26066 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:21.478749   26066 main.go:141] libmachine: Using API Version  1
	I0804 00:39:21.478771   26066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:21.479070   26066 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:21.479284   26066 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
	I0804 00:39:21.479470   26066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:21.479501   26066 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	I0804 00:39:21.482188   26066 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:21.482567   26066 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:39:21.482594   26066 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:21.482699   26066 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
	I0804 00:39:21.482871   26066 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:39:21.482995   26066 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
	I0804 00:39:21.483164   26066 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
	I0804 00:39:21.562381   26066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:21.578688   26066 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:39:21.578712   26066 api_server.go:166] Checking apiserver status ...
	I0804 00:39:21.578743   26066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:39:21.595376   26066 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup
	W0804 00:39:21.604847   26066 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:39:21.604883   26066 ssh_runner.go:195] Run: ls
	I0804 00:39:21.609454   26066 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 00:39:21.613915   26066 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 00:39:21.613938   26066 status.go:422] ha-230158-m03 apiserver status = Running (err=<nil>)
	I0804 00:39:21.613949   26066 status.go:257] ha-230158-m03 status: &{Name:ha-230158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:39:21.613967   26066 status.go:255] checking status of ha-230158-m04 ...
	I0804 00:39:21.614321   26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:21.614361   26066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:21.628680   26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39897
	I0804 00:39:21.629039   26066 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:21.629484   26066 main.go:141] libmachine: Using API Version  1
	I0804 00:39:21.629504   26066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:21.629773   26066 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:21.629927   26066 main.go:141] libmachine: (ha-230158-m04) Calling .GetState
	I0804 00:39:21.631436   26066 status.go:330] ha-230158-m04 host status = "Running" (err=<nil>)
	I0804 00:39:21.631450   26066 host.go:66] Checking if "ha-230158-m04" exists ...
	I0804 00:39:21.631731   26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:21.631783   26066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:21.646373   26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38075
	I0804 00:39:21.646782   26066 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:21.647320   26066 main.go:141] libmachine: Using API Version  1
	I0804 00:39:21.647348   26066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:21.647648   26066 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:21.647849   26066 main.go:141] libmachine: (ha-230158-m04) Calling .GetIP
	I0804 00:39:21.650277   26066 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:21.650773   26066 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
	I0804 00:39:21.650807   26066 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:21.650967   26066 host.go:66] Checking if "ha-230158-m04" exists ...
	I0804 00:39:21.651243   26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:21.651273   26066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:21.665889   26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33855
	I0804 00:39:21.666278   26066 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:21.666715   26066 main.go:141] libmachine: Using API Version  1
	I0804 00:39:21.666741   26066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:21.667031   26066 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:21.667228   26066 main.go:141] libmachine: (ha-230158-m04) Calling .DriverName
	I0804 00:39:21.667407   26066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:21.667425   26066 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHHostname
	I0804 00:39:21.670130   26066 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:21.670550   26066 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
	I0804 00:39:21.670570   26066 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:21.670720   26066 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHPort
	I0804 00:39:21.670914   26066 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHKeyPath
	I0804 00:39:21.671054   26066 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHUsername
	I0804 00:39:21.671197   26066 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m04/id_rsa Username:docker}
	I0804 00:39:21.749941   26066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:21.765829   26066 status.go:257] ha-230158-m04 status: &{Name:ha-230158-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
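
The status probe in the log above follows a two-step pattern that repeats for every control-plane node in this report: first look for a kube-apiserver process on the node ("sudo pgrep -xnf kube-apiserver.*minikube.*"), and only if one exists, confirm that the shared load-balancer endpoint answers on /healthz. On ha-230158-m02 the first step already fails, which is why its apiserver is reported as Stopped without any healthz call. The sketch below is a minimal, self-contained approximation of that flow, not minikube's actual status code: it runs the process check locally instead of over SSH, and the endpoint https://192.168.39.254:8443/healthz is copied verbatim from the log.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"os/exec"
		"time"
	)

	func main() {
		// Step 1: is a kube-apiserver process running? Mirrors the log's
		// "sudo pgrep -xnf kube-apiserver.*minikube.*" (run locally here,
		// not over SSH as minikube does).
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
			fmt.Println("apiserver status = Stopped (no matching process)")
			return
		}

		// Step 2: does the HA load-balancer endpoint answer on /healthz?
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The test cluster serves a self-signed certificate, so
			// verification is skipped for this illustration only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver status = Stopped (healthz unreachable):", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}

Because the healthz check targets the shared VIP (192.168.39.254) rather than the node's own IP, it can still return 200 while the node under test is down, which matches the log: m02's apiserver is Stopped, yet healthz succeeds when probed from m01 and m03.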
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
E0804 00:39:23.753751   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr: exit status 2 (763.502208ms)

-- stdout --
	ha-230158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-230158-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-230158-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-230158-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0804 00:39:23.260878   26151 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:39:23.261149   26151 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:39:23.261156   26151 out.go:304] Setting ErrFile to fd 2...
	I0804 00:39:23.261160   26151 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:39:23.261393   26151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
	I0804 00:39:23.261600   26151 out.go:298] Setting JSON to false
	I0804 00:39:23.261622   26151 mustload.go:65] Loading cluster: ha-230158
	I0804 00:39:23.262010   26151 notify.go:220] Checking for updates...
	I0804 00:39:23.263931   26151 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:39:23.263958   26151 status.go:255] checking status of ha-230158 ...
	I0804 00:39:23.264536   26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:23.264603   26151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:23.283993   26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34001
	I0804 00:39:23.284514   26151 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:23.285154   26151 main.go:141] libmachine: Using API Version  1
	I0804 00:39:23.285174   26151 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:23.285513   26151 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:23.285690   26151 main.go:141] libmachine: (ha-230158) Calling .GetState
	I0804 00:39:23.287144   26151 status.go:330] ha-230158 host status = "Running" (err=<nil>)
	I0804 00:39:23.287162   26151 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:39:23.287555   26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:23.287597   26151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:23.301785   26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40833
	I0804 00:39:23.302105   26151 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:23.302522   26151 main.go:141] libmachine: Using API Version  1
	I0804 00:39:23.302541   26151 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:23.302820   26151 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:23.303029   26151 main.go:141] libmachine: (ha-230158) Calling .GetIP
	I0804 00:39:23.305959   26151 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:23.306486   26151 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:39:23.306517   26151 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:23.306825   26151 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:39:23.307196   26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:23.307237   26151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:23.321371   26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44059
	I0804 00:39:23.321752   26151 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:23.322159   26151 main.go:141] libmachine: Using API Version  1
	I0804 00:39:23.322181   26151 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:23.322511   26151 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:23.322675   26151 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:39:23.322857   26151 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:23.322889   26151 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:39:23.325524   26151 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:23.325940   26151 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:39:23.325961   26151 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:23.326111   26151 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:39:23.326291   26151 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:39:23.326448   26151 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:39:23.326586   26151 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
	I0804 00:39:23.413694   26151 ssh_runner.go:195] Run: systemctl --version
	I0804 00:39:23.420782   26151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:23.434743   26151 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:39:23.434769   26151 api_server.go:166] Checking apiserver status ...
	I0804 00:39:23.434803   26151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:39:23.450555   26151 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup
	W0804 00:39:23.459911   26151 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:39:23.459971   26151 ssh_runner.go:195] Run: ls
	I0804 00:39:23.464899   26151 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 00:39:23.469215   26151 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 00:39:23.469240   26151 status.go:422] ha-230158 apiserver status = Running (err=<nil>)
	I0804 00:39:23.469257   26151 status.go:257] ha-230158 status: &{Name:ha-230158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:39:23.469276   26151 status.go:255] checking status of ha-230158-m02 ...
	I0804 00:39:23.469633   26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:23.469673   26151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:23.484185   26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45853
	I0804 00:39:23.484590   26151 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:23.485013   26151 main.go:141] libmachine: Using API Version  1
	I0804 00:39:23.485035   26151 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:23.485405   26151 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:23.485580   26151 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
	I0804 00:39:23.487194   26151 status.go:330] ha-230158-m02 host status = "Running" (err=<nil>)
	I0804 00:39:23.487212   26151 host.go:66] Checking if "ha-230158-m02" exists ...
	I0804 00:39:23.487504   26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:23.487540   26151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:23.501479   26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46091
	I0804 00:39:23.501788   26151 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:23.502159   26151 main.go:141] libmachine: Using API Version  1
	I0804 00:39:23.502179   26151 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:23.502498   26151 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:23.502665   26151 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
	I0804 00:39:23.505216   26151 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:23.505730   26151 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:39:23.505753   26151 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:23.505887   26151 host.go:66] Checking if "ha-230158-m02" exists ...
	I0804 00:39:23.506167   26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:23.506205   26151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:23.520520   26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35507
	I0804 00:39:23.520872   26151 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:23.521304   26151 main.go:141] libmachine: Using API Version  1
	I0804 00:39:23.521328   26151 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:23.521627   26151 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:23.521817   26151 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:39:23.521980   26151 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:23.521999   26151 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:39:23.524675   26151 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:23.525009   26151 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:39:23.525034   26151 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:23.525184   26151 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:39:23.525351   26151 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:39:23.525519   26151 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:39:23.525652   26151 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
	I0804 00:39:23.609158   26151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:23.622964   26151 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:39:23.622987   26151 api_server.go:166] Checking apiserver status ...
	I0804 00:39:23.623022   26151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0804 00:39:23.634412   26151 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:39:23.634434   26151 status.go:422] ha-230158-m02 apiserver status = Stopped (err=<nil>)
	I0804 00:39:23.634446   26151 status.go:257] ha-230158-m02 status: &{Name:ha-230158-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:39:23.634463   26151 status.go:255] checking status of ha-230158-m03 ...
	I0804 00:39:23.634793   26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:23.634836   26151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:23.651309   26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39981
	I0804 00:39:23.651765   26151 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:23.652234   26151 main.go:141] libmachine: Using API Version  1
	I0804 00:39:23.652258   26151 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:23.652588   26151 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:23.652794   26151 main.go:141] libmachine: (ha-230158-m03) Calling .GetState
	I0804 00:39:23.654197   26151 status.go:330] ha-230158-m03 host status = "Running" (err=<nil>)
	I0804 00:39:23.654214   26151 host.go:66] Checking if "ha-230158-m03" exists ...
	I0804 00:39:23.654528   26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:23.654560   26151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:23.670593   26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I0804 00:39:23.670977   26151 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:23.671446   26151 main.go:141] libmachine: Using API Version  1
	I0804 00:39:23.671469   26151 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:23.671770   26151 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:23.671940   26151 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
	I0804 00:39:23.674482   26151 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:23.674861   26151 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:39:23.674888   26151 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:23.675003   26151 host.go:66] Checking if "ha-230158-m03" exists ...
	I0804 00:39:23.675310   26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:23.675353   26151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:23.690209   26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
	I0804 00:39:23.690676   26151 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:23.691124   26151 main.go:141] libmachine: Using API Version  1
	I0804 00:39:23.691141   26151 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:23.691496   26151 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:23.691698   26151 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
	I0804 00:39:23.691922   26151 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:23.691941   26151 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	I0804 00:39:23.694503   26151 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:23.694916   26151 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:39:23.694947   26151 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:23.695051   26151 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
	I0804 00:39:23.695200   26151 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:39:23.695348   26151 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
	I0804 00:39:23.695460   26151 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
	I0804 00:39:23.772490   26151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:23.787137   26151 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:39:23.787165   26151 api_server.go:166] Checking apiserver status ...
	I0804 00:39:23.787193   26151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:39:23.807552   26151 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup
	W0804 00:39:23.818133   26151 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:39:23.818186   26151 ssh_runner.go:195] Run: ls
	I0804 00:39:23.822536   26151 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 00:39:23.828413   26151 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 00:39:23.828441   26151 status.go:422] ha-230158-m03 apiserver status = Running (err=<nil>)
	I0804 00:39:23.828453   26151 status.go:257] ha-230158-m03 status: &{Name:ha-230158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:39:23.828472   26151 status.go:255] checking status of ha-230158-m04 ...
	I0804 00:39:23.828746   26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:23.828780   26151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:23.846727   26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44185
	I0804 00:39:23.847145   26151 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:23.847684   26151 main.go:141] libmachine: Using API Version  1
	I0804 00:39:23.847703   26151 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:23.847991   26151 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:23.848180   26151 main.go:141] libmachine: (ha-230158-m04) Calling .GetState
	I0804 00:39:23.849882   26151 status.go:330] ha-230158-m04 host status = "Running" (err=<nil>)
	I0804 00:39:23.849897   26151 host.go:66] Checking if "ha-230158-m04" exists ...
	I0804 00:39:23.850191   26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:23.850244   26151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:23.864944   26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36771
	I0804 00:39:23.865301   26151 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:23.865723   26151 main.go:141] libmachine: Using API Version  1
	I0804 00:39:23.865744   26151 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:23.866058   26151 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:23.866220   26151 main.go:141] libmachine: (ha-230158-m04) Calling .GetIP
	I0804 00:39:23.869054   26151 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:23.869492   26151 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
	I0804 00:39:23.869520   26151 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:23.869652   26151 host.go:66] Checking if "ha-230158-m04" exists ...
	I0804 00:39:23.869940   26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:23.869991   26151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:23.884312   26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44515
	I0804 00:39:23.884710   26151 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:23.885224   26151 main.go:141] libmachine: Using API Version  1
	I0804 00:39:23.885245   26151 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:23.885570   26151 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:23.885737   26151 main.go:141] libmachine: (ha-230158-m04) Calling .DriverName
	I0804 00:39:23.885914   26151 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:23.885933   26151 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHHostname
	I0804 00:39:23.888463   26151 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:23.889010   26151 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
	I0804 00:39:23.889034   26151 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:23.889195   26151 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHPort
	I0804 00:39:23.889349   26151 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHKeyPath
	I0804 00:39:23.889631   26151 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHUsername
	I0804 00:39:23.889816   26151 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m04/id_rsa Username:docker}
	I0804 00:39:23.965324   26151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:23.980527   26151 status.go:257] ha-230158-m04 status: &{Name:ha-230158-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
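
The per-node kubelet state shown above is likewise derived from an exit code: "sudo systemctl is-active --quiet service kubelet" exits 0 when the unit is active and non-zero otherwise, which is how m02 can report "kubelet: Stopped" while its host is still "Running". A minimal local approximation (assuming a systemd host; minikube runs the real check over SSH inside each VM):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// "systemctl is-active --quiet <unit>" prints nothing; the unit
		// state is signalled purely through the exit code (0 = active).
		if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
			fmt.Println("kubelet: Stopped")
			return
		}
		fmt.Println("kubelet: Running")
	}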
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr: exit status 2 (774.189737ms)

-- stdout --
	ha-230158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-230158-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-230158-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-230158-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0804 00:39:27.138816   26250 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:39:27.139074   26250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:39:27.139084   26250 out.go:304] Setting ErrFile to fd 2...
	I0804 00:39:27.139090   26250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:39:27.139309   26250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
	I0804 00:39:27.139476   26250 out.go:298] Setting JSON to false
	I0804 00:39:27.139504   26250 mustload.go:65] Loading cluster: ha-230158
	I0804 00:39:27.139610   26250 notify.go:220] Checking for updates...
	I0804 00:39:27.139880   26250 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:39:27.139895   26250 status.go:255] checking status of ha-230158 ...
	I0804 00:39:27.140258   26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:27.140324   26250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:27.155658   26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33493
	I0804 00:39:27.156078   26250 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:27.156589   26250 main.go:141] libmachine: Using API Version  1
	I0804 00:39:27.156610   26250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:27.156925   26250 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:27.157115   26250 main.go:141] libmachine: (ha-230158) Calling .GetState
	I0804 00:39:27.158641   26250 status.go:330] ha-230158 host status = "Running" (err=<nil>)
	I0804 00:39:27.158656   26250 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:39:27.159034   26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:27.159074   26250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:27.178782   26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41447
	I0804 00:39:27.179180   26250 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:27.179748   26250 main.go:141] libmachine: Using API Version  1
	I0804 00:39:27.179779   26250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:27.180092   26250 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:27.180261   26250 main.go:141] libmachine: (ha-230158) Calling .GetIP
	I0804 00:39:27.183185   26250 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:27.183579   26250 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:39:27.183617   26250 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:27.183737   26250 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:39:27.184073   26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:27.184109   26250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:27.199041   26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45547
	I0804 00:39:27.199466   26250 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:27.199906   26250 main.go:141] libmachine: Using API Version  1
	I0804 00:39:27.199928   26250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:27.200221   26250 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:27.200439   26250 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:39:27.200638   26250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:27.200657   26250 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:39:27.203464   26250 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:27.204066   26250 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:39:27.204094   26250 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:27.204287   26250 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:39:27.204452   26250 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:39:27.204655   26250 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:39:27.204794   26250 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
	I0804 00:39:27.286121   26250 ssh_runner.go:195] Run: systemctl --version
	I0804 00:39:27.292093   26250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:27.307094   26250 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:39:27.307123   26250 api_server.go:166] Checking apiserver status ...
	I0804 00:39:27.307151   26250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:39:27.321657   26250 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup
	W0804 00:39:27.332371   26250 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:39:27.332425   26250 ssh_runner.go:195] Run: ls
	I0804 00:39:27.337061   26250 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 00:39:27.344543   26250 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 00:39:27.344574   26250 status.go:422] ha-230158 apiserver status = Running (err=<nil>)
	I0804 00:39:27.344589   26250 status.go:257] ha-230158 status: &{Name:ha-230158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:39:27.344612   26250 status.go:255] checking status of ha-230158-m02 ...
	I0804 00:39:27.345038   26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:27.345080   26250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:27.360753   26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37601
	I0804 00:39:27.361198   26250 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:27.361624   26250 main.go:141] libmachine: Using API Version  1
	I0804 00:39:27.361645   26250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:27.361919   26250 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:27.362075   26250 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
	I0804 00:39:27.364082   26250 status.go:330] ha-230158-m02 host status = "Running" (err=<nil>)
	I0804 00:39:27.364112   26250 host.go:66] Checking if "ha-230158-m02" exists ...
	I0804 00:39:27.364469   26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:27.364503   26250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:27.379236   26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45535
	I0804 00:39:27.379598   26250 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:27.380047   26250 main.go:141] libmachine: Using API Version  1
	I0804 00:39:27.380060   26250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:27.380349   26250 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:27.380511   26250 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
	I0804 00:39:27.383356   26250 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:27.383786   26250 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:39:27.383813   26250 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:27.383948   26250 host.go:66] Checking if "ha-230158-m02" exists ...
	I0804 00:39:27.384289   26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:27.384324   26250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:27.401922   26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41435
	I0804 00:39:27.402305   26250 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:27.402814   26250 main.go:141] libmachine: Using API Version  1
	I0804 00:39:27.402833   26250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:27.403108   26250 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:27.403303   26250 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:39:27.403513   26250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:27.403537   26250 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:39:27.406709   26250 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:27.407084   26250 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:39:27.407113   26250 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:27.407260   26250 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:39:27.407409   26250 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:39:27.407554   26250 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:39:27.407679   26250 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
	I0804 00:39:27.493732   26250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:27.508180   26250 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:39:27.508202   26250 api_server.go:166] Checking apiserver status ...
	I0804 00:39:27.508228   26250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0804 00:39:27.520199   26250 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:39:27.520236   26250 status.go:422] ha-230158-m02 apiserver status = Stopped (err=<nil>)
	I0804 00:39:27.520247   26250 status.go:257] ha-230158-m02 status: &{Name:ha-230158-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:39:27.520266   26250 status.go:255] checking status of ha-230158-m03 ...
	I0804 00:39:27.520570   26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:27.520602   26250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:27.535757   26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43541
	I0804 00:39:27.536245   26250 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:27.536719   26250 main.go:141] libmachine: Using API Version  1
	I0804 00:39:27.536776   26250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:27.537096   26250 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:27.537264   26250 main.go:141] libmachine: (ha-230158-m03) Calling .GetState
	I0804 00:39:27.538903   26250 status.go:330] ha-230158-m03 host status = "Running" (err=<nil>)
	I0804 00:39:27.538922   26250 host.go:66] Checking if "ha-230158-m03" exists ...
	I0804 00:39:27.539283   26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:27.539320   26250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:27.555353   26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33237
	I0804 00:39:27.555754   26250 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:27.556149   26250 main.go:141] libmachine: Using API Version  1
	I0804 00:39:27.556175   26250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:27.556512   26250 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:27.556720   26250 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
	I0804 00:39:27.559212   26250 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:27.559581   26250 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:39:27.559604   26250 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:27.559759   26250 host.go:66] Checking if "ha-230158-m03" exists ...
	I0804 00:39:27.560031   26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:27.560067   26250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:27.575198   26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43857
	I0804 00:39:27.575619   26250 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:27.576115   26250 main.go:141] libmachine: Using API Version  1
	I0804 00:39:27.576139   26250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:27.576449   26250 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:27.576677   26250 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
	I0804 00:39:27.576888   26250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:27.576910   26250 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	I0804 00:39:27.580158   26250 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:27.580533   26250 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:39:27.580551   26250 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:27.580734   26250 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
	I0804 00:39:27.580899   26250 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:39:27.581052   26250 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
	I0804 00:39:27.581182   26250 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
	I0804 00:39:27.662758   26250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:27.680424   26250 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:39:27.680449   26250 api_server.go:166] Checking apiserver status ...
	I0804 00:39:27.680486   26250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:39:27.701083   26250 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup
	W0804 00:39:27.711340   26250 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:39:27.711400   26250 ssh_runner.go:195] Run: ls
	I0804 00:39:27.715965   26250 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 00:39:27.720188   26250 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 00:39:27.720210   26250 status.go:422] ha-230158-m03 apiserver status = Running (err=<nil>)
	I0804 00:39:27.720234   26250 status.go:257] ha-230158-m03 status: &{Name:ha-230158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:39:27.720256   26250 status.go:255] checking status of ha-230158-m04 ...
	I0804 00:39:27.720550   26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:27.720591   26250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:27.735347   26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40527
	I0804 00:39:27.735771   26250 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:27.736220   26250 main.go:141] libmachine: Using API Version  1
	I0804 00:39:27.736240   26250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:27.736496   26250 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:27.736656   26250 main.go:141] libmachine: (ha-230158-m04) Calling .GetState
	I0804 00:39:27.738223   26250 status.go:330] ha-230158-m04 host status = "Running" (err=<nil>)
	I0804 00:39:27.738248   26250 host.go:66] Checking if "ha-230158-m04" exists ...
	I0804 00:39:27.738545   26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:27.738581   26250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:27.752752   26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35715
	I0804 00:39:27.753195   26250 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:27.753629   26250 main.go:141] libmachine: Using API Version  1
	I0804 00:39:27.753651   26250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:27.753956   26250 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:27.754148   26250 main.go:141] libmachine: (ha-230158-m04) Calling .GetIP
	I0804 00:39:27.757074   26250 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:27.757521   26250 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
	I0804 00:39:27.757546   26250 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:27.757690   26250 host.go:66] Checking if "ha-230158-m04" exists ...
	I0804 00:39:27.758001   26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:27.758035   26250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:27.772705   26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40129
	I0804 00:39:27.773044   26250 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:27.773507   26250 main.go:141] libmachine: Using API Version  1
	I0804 00:39:27.773529   26250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:27.773817   26250 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:27.773963   26250 main.go:141] libmachine: (ha-230158-m04) Calling .DriverName
	I0804 00:39:27.774148   26250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:27.774164   26250 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHHostname
	I0804 00:39:27.776959   26250 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:27.777383   26250 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
	I0804 00:39:27.777402   26250 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:27.777584   26250 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHPort
	I0804 00:39:27.777766   26250 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHKeyPath
	I0804 00:39:27.777924   26250 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHUsername
	I0804 00:39:27.778039   26250 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m04/id_rsa Username:docker}
	I0804 00:39:27.856411   26250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:27.871513   26250 status.go:257] ha-230158-m04 status: &{Name:ha-230158-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
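For context, the per-node apiserver check that produced the stderr above follows a fixed sequence: verify the kubelet service with systemctl is-active, look for a kube-apiserver process with pgrep, try to read that process's freezer cgroup (on cgroup v2 guests /proc/<pid>/cgroup has no per-controller rows, so the egrep fails harmlessly, as at 00:39:27.711340), and finally GET the load balancer's /healthz endpoint. The Go program below is a minimal reconstruction of that sequence, not minikube's actual code; the commands and the https://192.168.39.254:8443/healthz endpoint are copied from the log, while the timeout and TLS handling are illustrative assumptions.

// healthprobe.go: a sketch of the status sequence visible in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"time"
)

func main() {
	// Same commands the log runs over SSH on each node.
	for _, argv := range [][]string{
		{"sudo", "systemctl", "is-active", "--quiet", "service", "kubelet"},
		{"sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*"},
	} {
		if err := exec.Command(argv[0], argv[1:]...).Run(); err != nil {
			fmt.Printf("%v: not running (%v)\n", argv, err)
		}
	}

	// /healthz probe against the VIP from the log; the test cluster uses a
	// self-signed CA, so certificate verification is skipped here
	// (illustrative only).
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}

When every step succeeds, the node is reported APIServer:Running, as for ha-230158-m03; when pgrep finds no process, the node is reported Stopped without the healthz round trip, which is exactly the ha-230158-m02 path above.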
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr: exit status 2 (775.325921ms)

-- stdout --
	ha-230158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-230158-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-230158-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-230158-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0804 00:39:32.683030   26351 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:39:32.683245   26351 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:39:32.683252   26351 out.go:304] Setting ErrFile to fd 2...
	I0804 00:39:32.683256   26351 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:39:32.683410   26351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
	I0804 00:39:32.683567   26351 out.go:298] Setting JSON to false
	I0804 00:39:32.683590   26351 mustload.go:65] Loading cluster: ha-230158
	I0804 00:39:32.683671   26351 notify.go:220] Checking for updates...
	I0804 00:39:32.683915   26351 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:39:32.683928   26351 status.go:255] checking status of ha-230158 ...
	I0804 00:39:32.684310   26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:32.684366   26351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:32.703035   26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41623
	I0804 00:39:32.703385   26351 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:32.704031   26351 main.go:141] libmachine: Using API Version  1
	I0804 00:39:32.704069   26351 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:32.704409   26351 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:32.704622   26351 main.go:141] libmachine: (ha-230158) Calling .GetState
	I0804 00:39:32.706255   26351 status.go:330] ha-230158 host status = "Running" (err=<nil>)
	I0804 00:39:32.706278   26351 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:39:32.706544   26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:32.706588   26351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:32.722941   26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37663
	I0804 00:39:32.723385   26351 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:32.723829   26351 main.go:141] libmachine: Using API Version  1
	I0804 00:39:32.723853   26351 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:32.724146   26351 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:32.724472   26351 main.go:141] libmachine: (ha-230158) Calling .GetIP
	I0804 00:39:32.727202   26351 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:32.727616   26351 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:39:32.727655   26351 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:32.727708   26351 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:39:32.727971   26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:32.728001   26351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:32.742669   26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0804 00:39:32.743072   26351 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:32.743525   26351 main.go:141] libmachine: Using API Version  1
	I0804 00:39:32.743550   26351 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:32.743836   26351 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:32.744085   26351 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:39:32.744373   26351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:32.744396   26351 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:39:32.747286   26351 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:32.747678   26351 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:39:32.747700   26351 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:32.747815   26351 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:39:32.747978   26351 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:39:32.748120   26351 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:39:32.748270   26351 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
	I0804 00:39:32.829911   26351 ssh_runner.go:195] Run: systemctl --version
	I0804 00:39:32.836641   26351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:32.855108   26351 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:39:32.855136   26351 api_server.go:166] Checking apiserver status ...
	I0804 00:39:32.855181   26351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:39:32.874091   26351 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup
	W0804 00:39:32.887944   26351 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:39:32.887997   26351 ssh_runner.go:195] Run: ls
	I0804 00:39:32.893211   26351 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 00:39:32.898133   26351 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 00:39:32.898156   26351 status.go:422] ha-230158 apiserver status = Running (err=<nil>)
	I0804 00:39:32.898170   26351 status.go:257] ha-230158 status: &{Name:ha-230158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:39:32.898200   26351 status.go:255] checking status of ha-230158-m02 ...
	I0804 00:39:32.898579   26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:32.898621   26351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:32.914153   26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36417
	I0804 00:39:32.914630   26351 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:32.915126   26351 main.go:141] libmachine: Using API Version  1
	I0804 00:39:32.915142   26351 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:32.915449   26351 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:32.915690   26351 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
	I0804 00:39:32.917269   26351 status.go:330] ha-230158-m02 host status = "Running" (err=<nil>)
	I0804 00:39:32.917295   26351 host.go:66] Checking if "ha-230158-m02" exists ...
	I0804 00:39:32.917696   26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:32.917736   26351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:32.933117   26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34635
	I0804 00:39:32.933558   26351 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:32.934049   26351 main.go:141] libmachine: Using API Version  1
	I0804 00:39:32.934069   26351 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:32.934408   26351 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:32.934594   26351 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
	I0804 00:39:32.937847   26351 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:32.938398   26351 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:39:32.938425   26351 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:32.938592   26351 host.go:66] Checking if "ha-230158-m02" exists ...
	I0804 00:39:32.938923   26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:32.938962   26351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:32.953162   26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I0804 00:39:32.953564   26351 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:32.954044   26351 main.go:141] libmachine: Using API Version  1
	I0804 00:39:32.954074   26351 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:32.954380   26351 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:32.954527   26351 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:39:32.954712   26351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:32.954734   26351 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:39:32.957106   26351 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:32.957524   26351 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:39:32.957563   26351 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:32.957649   26351 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:39:32.957800   26351 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:39:32.957937   26351 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:39:32.958051   26351 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
	I0804 00:39:33.041500   26351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:33.056749   26351 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:39:33.056773   26351 api_server.go:166] Checking apiserver status ...
	I0804 00:39:33.056803   26351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0804 00:39:33.069196   26351 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:39:33.069237   26351 status.go:422] ha-230158-m02 apiserver status = Stopped (err=<nil>)
	I0804 00:39:33.069249   26351 status.go:257] ha-230158-m02 status: &{Name:ha-230158-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:39:33.069268   26351 status.go:255] checking status of ha-230158-m03 ...
	I0804 00:39:33.069581   26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:33.069636   26351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:33.085384   26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35871
	I0804 00:39:33.085770   26351 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:33.086359   26351 main.go:141] libmachine: Using API Version  1
	I0804 00:39:33.086381   26351 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:33.086699   26351 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:33.086880   26351 main.go:141] libmachine: (ha-230158-m03) Calling .GetState
	I0804 00:39:33.088340   26351 status.go:330] ha-230158-m03 host status = "Running" (err=<nil>)
	I0804 00:39:33.088355   26351 host.go:66] Checking if "ha-230158-m03" exists ...
	I0804 00:39:33.088649   26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:33.088698   26351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:33.103742   26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40265
	I0804 00:39:33.104154   26351 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:33.104587   26351 main.go:141] libmachine: Using API Version  1
	I0804 00:39:33.104605   26351 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:33.104940   26351 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:33.105086   26351 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
	I0804 00:39:33.108149   26351 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:33.108641   26351 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:39:33.108668   26351 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:33.108793   26351 host.go:66] Checking if "ha-230158-m03" exists ...
	I0804 00:39:33.109194   26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:33.109237   26351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:33.124133   26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43113
	I0804 00:39:33.124482   26351 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:33.125070   26351 main.go:141] libmachine: Using API Version  1
	I0804 00:39:33.125086   26351 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:33.125388   26351 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:33.125586   26351 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
	I0804 00:39:33.125779   26351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:33.125805   26351 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	I0804 00:39:33.128457   26351 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:33.128836   26351 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:39:33.128869   26351 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:33.129029   26351 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
	I0804 00:39:33.129184   26351 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:39:33.129354   26351 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
	I0804 00:39:33.129490   26351 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
	I0804 00:39:33.209193   26351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:33.225012   26351 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:39:33.225041   26351 api_server.go:166] Checking apiserver status ...
	I0804 00:39:33.225079   26351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:39:33.238913   26351 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup
	W0804 00:39:33.256898   26351 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:39:33.256937   26351 ssh_runner.go:195] Run: ls
	I0804 00:39:33.260983   26351 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 00:39:33.265639   26351 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 00:39:33.265658   26351 status.go:422] ha-230158-m03 apiserver status = Running (err=<nil>)
	I0804 00:39:33.265665   26351 status.go:257] ha-230158-m03 status: &{Name:ha-230158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:39:33.265678   26351 status.go:255] checking status of ha-230158-m04 ...
	I0804 00:39:33.265941   26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:33.265971   26351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:33.281282   26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35081
	I0804 00:39:33.281668   26351 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:33.282120   26351 main.go:141] libmachine: Using API Version  1
	I0804 00:39:33.282140   26351 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:33.282451   26351 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:33.282629   26351 main.go:141] libmachine: (ha-230158-m04) Calling .GetState
	I0804 00:39:33.284085   26351 status.go:330] ha-230158-m04 host status = "Running" (err=<nil>)
	I0804 00:39:33.284100   26351 host.go:66] Checking if "ha-230158-m04" exists ...
	I0804 00:39:33.284496   26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:33.284536   26351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:33.298411   26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41741
	I0804 00:39:33.298745   26351 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:33.299170   26351 main.go:141] libmachine: Using API Version  1
	I0804 00:39:33.299192   26351 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:33.299527   26351 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:33.299708   26351 main.go:141] libmachine: (ha-230158-m04) Calling .GetIP
	I0804 00:39:33.302275   26351 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:33.302837   26351 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
	I0804 00:39:33.302860   26351 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:33.302868   26351 host.go:66] Checking if "ha-230158-m04" exists ...
	I0804 00:39:33.303146   26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:33.303177   26351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:33.317537   26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41017
	I0804 00:39:33.317891   26351 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:33.318312   26351 main.go:141] libmachine: Using API Version  1
	I0804 00:39:33.318331   26351 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:33.318612   26351 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:33.318798   26351 main.go:141] libmachine: (ha-230158-m04) Calling .DriverName
	I0804 00:39:33.318960   26351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:33.318978   26351 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHHostname
	I0804 00:39:33.321630   26351 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:33.322071   26351 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
	I0804 00:39:33.322098   26351 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:33.322273   26351 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHPort
	I0804 00:39:33.322435   26351 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHKeyPath
	I0804 00:39:33.322578   26351 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHUsername
	I0804 00:39:33.322722   26351 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m04/id_rsa Username:docker}
	I0804 00:39:33.401533   26351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:33.416856   26351 status.go:257] ha-230158-m04 status: &{Name:ha-230158-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
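The exit status 2 above tracks the stdout block: ha-230158-m02 still reports kubelet and apiserver Stopped, so the status command signals a degraded cluster and the test re-runs it (here and again at 00:39:42 below) while waiting for m02 to recover. Below is a rough sketch of that polling pattern, using the same command line as the test; the two-minute deadline and five-second interval are assumptions for illustration, not values from ha_test.go.

// waitstatus.go: poll "minikube status" until no component reports Stopped.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Output() returns stdout even when the command exits non-zero.
		out, _ := exec.Command("out/minikube-linux-amd64", "-p", "ha-230158",
			"status", "-v=7", "--alsologtostderr").Output()
		if !strings.Contains(string(out), "Stopped") {
			fmt.Println("all components running")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out: a component still reports Stopped")
}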
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr: exit status 2 (760.485639ms)

-- stdout --
	ha-230158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-230158-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-230158-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-230158-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0804 00:39:42.965570   26467 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:39:42.965793   26467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:39:42.965802   26467 out.go:304] Setting ErrFile to fd 2...
	I0804 00:39:42.965806   26467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:39:42.965983   26467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
	I0804 00:39:42.966138   26467 out.go:298] Setting JSON to false
	I0804 00:39:42.966161   26467 mustload.go:65] Loading cluster: ha-230158
	I0804 00:39:42.966265   26467 notify.go:220] Checking for updates...
	I0804 00:39:42.966548   26467 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:39:42.966564   26467 status.go:255] checking status of ha-230158 ...
	I0804 00:39:42.966934   26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:42.966987   26467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:42.985623   26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44931
	I0804 00:39:42.985989   26467 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:42.986537   26467 main.go:141] libmachine: Using API Version  1
	I0804 00:39:42.986587   26467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:42.987005   26467 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:42.987215   26467 main.go:141] libmachine: (ha-230158) Calling .GetState
	I0804 00:39:42.989087   26467 status.go:330] ha-230158 host status = "Running" (err=<nil>)
	I0804 00:39:42.989105   26467 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:39:42.989545   26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:42.989591   26467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:43.008685   26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33961
	I0804 00:39:43.009130   26467 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:43.009635   26467 main.go:141] libmachine: Using API Version  1
	I0804 00:39:43.009659   26467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:43.010019   26467 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:43.010192   26467 main.go:141] libmachine: (ha-230158) Calling .GetIP
	I0804 00:39:43.013316   26467 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:43.013803   26467 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:39:43.013830   26467 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:43.014016   26467 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:39:43.014452   26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:43.014500   26467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:43.030130   26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32775
	I0804 00:39:43.030562   26467 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:43.030955   26467 main.go:141] libmachine: Using API Version  1
	I0804 00:39:43.030976   26467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:43.031311   26467 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:43.031495   26467 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:39:43.031665   26467 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:43.031690   26467 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:39:43.034592   26467 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:43.035081   26467 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:39:43.035116   26467 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:39:43.035257   26467 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:39:43.035429   26467 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:39:43.035574   26467 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:39:43.035730   26467 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
	I0804 00:39:43.118964   26467 ssh_runner.go:195] Run: systemctl --version
	I0804 00:39:43.125566   26467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:43.140717   26467 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:39:43.140753   26467 api_server.go:166] Checking apiserver status ...
	I0804 00:39:43.140789   26467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:39:43.155035   26467 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup
	W0804 00:39:43.165877   26467 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:39:43.165912   26467 ssh_runner.go:195] Run: ls
	I0804 00:39:43.169933   26467 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 00:39:43.173992   26467 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 00:39:43.174009   26467 status.go:422] ha-230158 apiserver status = Running (err=<nil>)
	I0804 00:39:43.174018   26467 status.go:257] ha-230158 status: &{Name:ha-230158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:39:43.174030   26467 status.go:255] checking status of ha-230158-m02 ...
	I0804 00:39:43.174337   26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:43.174376   26467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:43.190386   26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44999
	I0804 00:39:43.190879   26467 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:43.191469   26467 main.go:141] libmachine: Using API Version  1
	I0804 00:39:43.191494   26467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:43.191814   26467 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:43.192035   26467 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
	I0804 00:39:43.193622   26467 status.go:330] ha-230158-m02 host status = "Running" (err=<nil>)
	I0804 00:39:43.193638   26467 host.go:66] Checking if "ha-230158-m02" exists ...
	I0804 00:39:43.193950   26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:43.193993   26467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:43.208125   26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44131
	I0804 00:39:43.208570   26467 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:43.209091   26467 main.go:141] libmachine: Using API Version  1
	I0804 00:39:43.209110   26467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:43.209436   26467 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:43.209612   26467 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
	I0804 00:39:43.212323   26467 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:43.212761   26467 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:39:43.212785   26467 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:43.212956   26467 host.go:66] Checking if "ha-230158-m02" exists ...
	I0804 00:39:43.213291   26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:43.213324   26467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:43.227103   26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36853
	I0804 00:39:43.227420   26467 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:43.227829   26467 main.go:141] libmachine: Using API Version  1
	I0804 00:39:43.227851   26467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:43.228138   26467 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:43.228300   26467 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:39:43.228462   26467 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:43.228482   26467 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:39:43.231186   26467 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:43.231601   26467 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:39:43.231626   26467 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:39:43.231760   26467 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:39:43.231905   26467 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:39:43.232035   26467 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:39:43.232166   26467 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
	I0804 00:39:43.317446   26467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:43.334560   26467 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:39:43.334585   26467 api_server.go:166] Checking apiserver status ...
	I0804 00:39:43.334622   26467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0804 00:39:43.348030   26467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:39:43.348052   26467 status.go:422] ha-230158-m02 apiserver status = Stopped (err=<nil>)
	I0804 00:39:43.348062   26467 status.go:257] ha-230158-m02 status: &{Name:ha-230158-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:39:43.348078   26467 status.go:255] checking status of ha-230158-m03 ...
	I0804 00:39:43.348414   26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:43.348453   26467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:43.362963   26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0804 00:39:43.363327   26467 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:43.363817   26467 main.go:141] libmachine: Using API Version  1
	I0804 00:39:43.363841   26467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:43.364175   26467 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:43.364406   26467 main.go:141] libmachine: (ha-230158-m03) Calling .GetState
	I0804 00:39:43.365954   26467 status.go:330] ha-230158-m03 host status = "Running" (err=<nil>)
	I0804 00:39:43.365967   26467 host.go:66] Checking if "ha-230158-m03" exists ...
	I0804 00:39:43.366368   26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:43.366408   26467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:43.380961   26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33623
	I0804 00:39:43.381363   26467 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:43.381789   26467 main.go:141] libmachine: Using API Version  1
	I0804 00:39:43.381816   26467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:43.382118   26467 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:43.382321   26467 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
	I0804 00:39:43.384941   26467 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:43.385396   26467 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:39:43.385423   26467 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:43.385561   26467 host.go:66] Checking if "ha-230158-m03" exists ...
	I0804 00:39:43.385954   26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:43.385992   26467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:43.401063   26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33521
	I0804 00:39:43.401415   26467 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:43.401786   26467 main.go:141] libmachine: Using API Version  1
	I0804 00:39:43.401803   26467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:43.402155   26467 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:43.402378   26467 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
	I0804 00:39:43.402576   26467 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:43.402598   26467 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	I0804 00:39:43.405416   26467 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:43.405770   26467 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:39:43.405810   26467 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:39:43.405885   26467 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
	I0804 00:39:43.406065   26467 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:39:43.406207   26467 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
	I0804 00:39:43.406353   26467 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
	I0804 00:39:43.486420   26467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:43.502427   26467 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:39:43.502453   26467 api_server.go:166] Checking apiserver status ...
	I0804 00:39:43.502494   26467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:39:43.515706   26467 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup
	W0804 00:39:43.524825   26467 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:39:43.524861   26467 ssh_runner.go:195] Run: ls
	I0804 00:39:43.529372   26467 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 00:39:43.533602   26467 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 00:39:43.533623   26467 status.go:422] ha-230158-m03 apiserver status = Running (err=<nil>)
	I0804 00:39:43.533633   26467 status.go:257] ha-230158-m03 status: &{Name:ha-230158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:39:43.533654   26467 status.go:255] checking status of ha-230158-m04 ...
	I0804 00:39:43.533942   26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:43.533978   26467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:43.549293   26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35063
	I0804 00:39:43.549660   26467 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:43.550071   26467 main.go:141] libmachine: Using API Version  1
	I0804 00:39:43.550086   26467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:43.550453   26467 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:43.550671   26467 main.go:141] libmachine: (ha-230158-m04) Calling .GetState
	I0804 00:39:43.552228   26467 status.go:330] ha-230158-m04 host status = "Running" (err=<nil>)
	I0804 00:39:43.552243   26467 host.go:66] Checking if "ha-230158-m04" exists ...
	I0804 00:39:43.552540   26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:43.552575   26467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:43.566723   26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33683
	I0804 00:39:43.567216   26467 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:43.567685   26467 main.go:141] libmachine: Using API Version  1
	I0804 00:39:43.567700   26467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:43.567995   26467 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:43.568184   26467 main.go:141] libmachine: (ha-230158-m04) Calling .GetIP
	I0804 00:39:43.571273   26467 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:43.571698   26467 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
	I0804 00:39:43.571725   26467 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:43.571872   26467 host.go:66] Checking if "ha-230158-m04" exists ...
	I0804 00:39:43.572190   26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:39:43.572226   26467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:39:43.586366   26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43635
	I0804 00:39:43.586817   26467 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:39:43.587337   26467 main.go:141] libmachine: Using API Version  1
	I0804 00:39:43.587360   26467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:39:43.587625   26467 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:39:43.587861   26467 main.go:141] libmachine: (ha-230158-m04) Calling .DriverName
	I0804 00:39:43.588063   26467 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:39:43.588083   26467 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHHostname
	I0804 00:39:43.591237   26467 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:43.591732   26467 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
	I0804 00:39:43.591760   26467 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:39:43.591961   26467 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHPort
	I0804 00:39:43.592150   26467 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHKeyPath
	I0804 00:39:43.592422   26467 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHUsername
	I0804 00:39:43.592582   26467 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m04/id_rsa Username:docker}
	I0804 00:39:43.669969   26467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:39:43.684037   26467 status.go:257] ha-230158-m04 status: &{Name:ha-230158-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr: exit status 2 (774.497342ms)

-- stdout --
	ha-230158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-230158-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-230158-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-230158-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0804 00:40:00.705983   26628 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:40:00.706100   26628 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:40:00.706107   26628 out.go:304] Setting ErrFile to fd 2...
	I0804 00:40:00.706111   26628 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:40:00.706305   26628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
	I0804 00:40:00.706532   26628 out.go:298] Setting JSON to false
	I0804 00:40:00.706555   26628 mustload.go:65] Loading cluster: ha-230158
	I0804 00:40:00.706595   26628 notify.go:220] Checking for updates...
	I0804 00:40:00.706903   26628 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:40:00.706915   26628 status.go:255] checking status of ha-230158 ...
	I0804 00:40:00.707342   26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:00.707405   26628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:00.726813   26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44235
	I0804 00:40:00.727241   26628 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:00.727920   26628 main.go:141] libmachine: Using API Version  1
	I0804 00:40:00.727946   26628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:00.728300   26628 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:00.728534   26628 main.go:141] libmachine: (ha-230158) Calling .GetState
	I0804 00:40:00.730088   26628 status.go:330] ha-230158 host status = "Running" (err=<nil>)
	I0804 00:40:00.730108   26628 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:40:00.730435   26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:00.730476   26628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:00.744540   26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I0804 00:40:00.744965   26628 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:00.745419   26628 main.go:141] libmachine: Using API Version  1
	I0804 00:40:00.745458   26628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:00.745752   26628 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:00.745938   26628 main.go:141] libmachine: (ha-230158) Calling .GetIP
	I0804 00:40:00.748744   26628 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:40:00.749207   26628 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:40:00.749226   26628 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:40:00.749364   26628 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:40:00.749735   26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:00.749773   26628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:00.764020   26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35757
	I0804 00:40:00.764553   26628 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:00.765033   26628 main.go:141] libmachine: Using API Version  1
	I0804 00:40:00.765055   26628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:00.765400   26628 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:00.765608   26628 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:40:00.765816   26628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:40:00.765878   26628 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:40:00.768271   26628 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:40:00.768696   26628 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:40:00.768722   26628 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:40:00.768877   26628 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:40:00.769034   26628 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:40:00.769207   26628 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:40:00.769342   26628 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
	I0804 00:40:00.859720   26628 ssh_runner.go:195] Run: systemctl --version
	I0804 00:40:00.866803   26628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:40:00.885397   26628 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:40:00.885424   26628 api_server.go:166] Checking apiserver status ...
	I0804 00:40:00.885464   26628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:40:00.899989   26628 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup
	W0804 00:40:00.909809   26628 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:40:00.909862   26628 ssh_runner.go:195] Run: ls
	I0804 00:40:00.914777   26628 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 00:40:00.920715   26628 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 00:40:00.920737   26628 status.go:422] ha-230158 apiserver status = Running (err=<nil>)
	I0804 00:40:00.920749   26628 status.go:257] ha-230158 status: &{Name:ha-230158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:40:00.920767   26628 status.go:255] checking status of ha-230158-m02 ...
	I0804 00:40:00.921047   26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:00.921089   26628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:00.938278   26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0804 00:40:00.938671   26628 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:00.939091   26628 main.go:141] libmachine: Using API Version  1
	I0804 00:40:00.939111   26628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:00.939404   26628 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:00.939599   26628 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
	I0804 00:40:00.941542   26628 status.go:330] ha-230158-m02 host status = "Running" (err=<nil>)
	I0804 00:40:00.941555   26628 host.go:66] Checking if "ha-230158-m02" exists ...
	I0804 00:40:00.941822   26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:00.941855   26628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:00.956049   26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33467
	I0804 00:40:00.956359   26628 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:00.956828   26628 main.go:141] libmachine: Using API Version  1
	I0804 00:40:00.956849   26628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:00.957211   26628 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:00.957395   26628 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
	I0804 00:40:00.959848   26628 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:40:00.960225   26628 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:40:00.960261   26628 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:40:00.960364   26628 host.go:66] Checking if "ha-230158-m02" exists ...
	I0804 00:40:00.960670   26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:00.960700   26628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:00.975436   26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35715
	I0804 00:40:00.975758   26628 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:00.976115   26628 main.go:141] libmachine: Using API Version  1
	I0804 00:40:00.976132   26628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:00.976465   26628 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:00.976659   26628 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:40:00.976838   26628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:40:00.976857   26628 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:40:00.979160   26628 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:40:00.979636   26628 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:40:00.979660   26628 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:40:00.979784   26628 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:40:00.979938   26628 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:40:00.980132   26628 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:40:00.980288   26628 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
	I0804 00:40:01.065427   26628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:40:01.081150   26628 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:40:01.081171   26628 api_server.go:166] Checking apiserver status ...
	I0804 00:40:01.081200   26628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0804 00:40:01.093834   26628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:40:01.093852   26628 status.go:422] ha-230158-m02 apiserver status = Stopped (err=<nil>)
	I0804 00:40:01.093860   26628 status.go:257] ha-230158-m02 status: &{Name:ha-230158-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:40:01.093875   26628 status.go:255] checking status of ha-230158-m03 ...
	I0804 00:40:01.094218   26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:01.094279   26628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:01.109668   26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I0804 00:40:01.110093   26628 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:01.110588   26628 main.go:141] libmachine: Using API Version  1
	I0804 00:40:01.110610   26628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:01.110975   26628 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:01.111157   26628 main.go:141] libmachine: (ha-230158-m03) Calling .GetState
	I0804 00:40:01.112853   26628 status.go:330] ha-230158-m03 host status = "Running" (err=<nil>)
	I0804 00:40:01.112872   26628 host.go:66] Checking if "ha-230158-m03" exists ...
	I0804 00:40:01.113236   26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:01.113280   26628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:01.128092   26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43451
	I0804 00:40:01.128444   26628 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:01.128881   26628 main.go:141] libmachine: Using API Version  1
	I0804 00:40:01.128905   26628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:01.129186   26628 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:01.129389   26628 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
	I0804 00:40:01.132514   26628 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:40:01.133096   26628 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:40:01.133136   26628 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:40:01.133456   26628 host.go:66] Checking if "ha-230158-m03" exists ...
	I0804 00:40:01.133769   26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:01.133809   26628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:01.149480   26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41797
	I0804 00:40:01.149905   26628 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:01.150447   26628 main.go:141] libmachine: Using API Version  1
	I0804 00:40:01.150472   26628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:01.150749   26628 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:01.150970   26628 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
	I0804 00:40:01.151153   26628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:40:01.151174   26628 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	I0804 00:40:01.154041   26628 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:40:01.154547   26628 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:40:01.154583   26628 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:40:01.154709   26628 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
	I0804 00:40:01.154897   26628 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:40:01.155063   26628 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
	I0804 00:40:01.155211   26628 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
	I0804 00:40:01.234354   26628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:40:01.252060   26628 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:40:01.252092   26628 api_server.go:166] Checking apiserver status ...
	I0804 00:40:01.252132   26628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:40:01.267481   26628 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup
	W0804 00:40:01.276924   26628 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:40:01.276966   26628 ssh_runner.go:195] Run: ls
	I0804 00:40:01.281584   26628 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 00:40:01.285812   26628 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 00:40:01.285836   26628 status.go:422] ha-230158-m03 apiserver status = Running (err=<nil>)
	I0804 00:40:01.285847   26628 status.go:257] ha-230158-m03 status: &{Name:ha-230158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:40:01.285865   26628 status.go:255] checking status of ha-230158-m04 ...
	I0804 00:40:01.286148   26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:01.286182   26628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:01.301131   26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45627
	I0804 00:40:01.301565   26628 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:01.302003   26628 main.go:141] libmachine: Using API Version  1
	I0804 00:40:01.302022   26628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:01.302342   26628 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:01.302535   26628 main.go:141] libmachine: (ha-230158-m04) Calling .GetState
	I0804 00:40:01.303874   26628 status.go:330] ha-230158-m04 host status = "Running" (err=<nil>)
	I0804 00:40:01.303895   26628 host.go:66] Checking if "ha-230158-m04" exists ...
	I0804 00:40:01.304211   26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:01.304246   26628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:01.318453   26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38739
	I0804 00:40:01.318796   26628 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:01.319251   26628 main.go:141] libmachine: Using API Version  1
	I0804 00:40:01.319270   26628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:01.319562   26628 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:01.319764   26628 main.go:141] libmachine: (ha-230158-m04) Calling .GetIP
	I0804 00:40:01.322395   26628 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:40:01.322776   26628 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
	I0804 00:40:01.322811   26628 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:40:01.322944   26628 host.go:66] Checking if "ha-230158-m04" exists ...
	I0804 00:40:01.323336   26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:01.323402   26628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:01.337881   26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33047
	I0804 00:40:01.338340   26628 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:01.338784   26628 main.go:141] libmachine: Using API Version  1
	I0804 00:40:01.338806   26628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:01.339157   26628 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:01.339363   26628 main.go:141] libmachine: (ha-230158-m04) Calling .DriverName
	I0804 00:40:01.339554   26628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:40:01.339583   26628 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHHostname
	I0804 00:40:01.342103   26628 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:40:01.342523   26628 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
	I0804 00:40:01.342560   26628 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:40:01.342715   26628 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHPort
	I0804 00:40:01.342891   26628 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHKeyPath
	I0804 00:40:01.343046   26628 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHUsername
	I0804 00:40:01.343227   26628 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m04/id_rsa Username:docker}
	I0804 00:40:01.421977   26628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:40:01.436806   26628 status.go:257] ha-230158-m04 status: &{Name:ha-230158-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
E0804 00:40:06.990215   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr: exit status 2 (762.939021ms)

-- stdout --
	ha-230158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-230158-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-230158-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-230158-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0804 00:40:13.625178   26761 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:40:13.625402   26761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:40:13.625410   26761 out.go:304] Setting ErrFile to fd 2...
	I0804 00:40:13.625414   26761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:40:13.625563   26761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
	I0804 00:40:13.625702   26761 out.go:298] Setting JSON to false
	I0804 00:40:13.625723   26761 mustload.go:65] Loading cluster: ha-230158
	I0804 00:40:13.625757   26761 notify.go:220] Checking for updates...
	I0804 00:40:13.626138   26761 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:40:13.626156   26761 status.go:255] checking status of ha-230158 ...
	I0804 00:40:13.626581   26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:13.626639   26761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:13.647711   26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34223
	I0804 00:40:13.648129   26761 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:13.648713   26761 main.go:141] libmachine: Using API Version  1
	I0804 00:40:13.648734   26761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:13.649163   26761 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:13.649485   26761 main.go:141] libmachine: (ha-230158) Calling .GetState
	I0804 00:40:13.651207   26761 status.go:330] ha-230158 host status = "Running" (err=<nil>)
	I0804 00:40:13.651221   26761 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:40:13.651538   26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:13.651581   26761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:13.665791   26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44099
	I0804 00:40:13.666179   26761 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:13.666689   26761 main.go:141] libmachine: Using API Version  1
	I0804 00:40:13.666711   26761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:13.666996   26761 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:13.667185   26761 main.go:141] libmachine: (ha-230158) Calling .GetIP
	I0804 00:40:13.670065   26761 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:40:13.670539   26761 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:40:13.670563   26761 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:40:13.670679   26761 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:40:13.670930   26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:13.670971   26761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:13.685230   26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43215
	I0804 00:40:13.685550   26761 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:13.685980   26761 main.go:141] libmachine: Using API Version  1
	I0804 00:40:13.685998   26761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:13.686311   26761 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:13.686504   26761 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:40:13.686677   26761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:40:13.686695   26761 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:40:13.689125   26761 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:40:13.689494   26761 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:40:13.689513   26761 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:40:13.689645   26761 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:40:13.689835   26761 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:40:13.690037   26761 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:40:13.690196   26761 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
	I0804 00:40:13.773913   26761 ssh_runner.go:195] Run: systemctl --version
	I0804 00:40:13.781062   26761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:40:13.797485   26761 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:40:13.797513   26761 api_server.go:166] Checking apiserver status ...
	I0804 00:40:13.797545   26761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:40:13.812929   26761 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup
	W0804 00:40:13.823126   26761 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:40:13.823184   26761 ssh_runner.go:195] Run: ls
	I0804 00:40:13.827659   26761 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 00:40:13.833525   26761 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 00:40:13.833544   26761 status.go:422] ha-230158 apiserver status = Running (err=<nil>)
	I0804 00:40:13.833552   26761 status.go:257] ha-230158 status: &{Name:ha-230158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:40:13.833567   26761 status.go:255] checking status of ha-230158-m02 ...
	I0804 00:40:13.833867   26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:13.833905   26761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:13.848575   26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39057
	I0804 00:40:13.848917   26761 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:13.849357   26761 main.go:141] libmachine: Using API Version  1
	I0804 00:40:13.849378   26761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:13.849675   26761 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:13.849861   26761 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
	I0804 00:40:13.851329   26761 status.go:330] ha-230158-m02 host status = "Running" (err=<nil>)
	I0804 00:40:13.851346   26761 host.go:66] Checking if "ha-230158-m02" exists ...
	I0804 00:40:13.851663   26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:13.851697   26761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:13.866462   26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45723
	I0804 00:40:13.866850   26761 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:13.867315   26761 main.go:141] libmachine: Using API Version  1
	I0804 00:40:13.867340   26761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:13.867600   26761 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:13.867782   26761 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
	I0804 00:40:13.870119   26761 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:40:13.870552   26761 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:40:13.870579   26761 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:40:13.870801   26761 host.go:66] Checking if "ha-230158-m02" exists ...
	I0804 00:40:13.871116   26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:13.871149   26761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:13.885387   26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33409
	I0804 00:40:13.885707   26761 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:13.886151   26761 main.go:141] libmachine: Using API Version  1
	I0804 00:40:13.886171   26761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:13.886538   26761 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:13.886819   26761 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:40:13.887052   26761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:40:13.887073   26761 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:40:13.889724   26761 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:40:13.890148   26761 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:40:13.890170   26761 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:40:13.890343   26761 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:40:13.890541   26761 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:40:13.890797   26761 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:40:13.890985   26761 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
	I0804 00:40:13.973179   26761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:40:13.988079   26761 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:40:13.988102   26761 api_server.go:166] Checking apiserver status ...
	I0804 00:40:13.988132   26761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0804 00:40:14.000664   26761 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:40:14.000688   26761 status.go:422] ha-230158-m02 apiserver status = Stopped (err=<nil>)
	I0804 00:40:14.000696   26761 status.go:257] ha-230158-m02 status: &{Name:ha-230158-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:40:14.000709   26761 status.go:255] checking status of ha-230158-m03 ...
	I0804 00:40:14.001144   26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:14.001186   26761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:14.017077   26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41765
	I0804 00:40:14.017531   26761 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:14.017967   26761 main.go:141] libmachine: Using API Version  1
	I0804 00:40:14.017986   26761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:14.018313   26761 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:14.018479   26761 main.go:141] libmachine: (ha-230158-m03) Calling .GetState
	I0804 00:40:14.020146   26761 status.go:330] ha-230158-m03 host status = "Running" (err=<nil>)
	I0804 00:40:14.020161   26761 host.go:66] Checking if "ha-230158-m03" exists ...
	I0804 00:40:14.020444   26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:14.020495   26761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:14.035027   26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35089
	I0804 00:40:14.035357   26761 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:14.035793   26761 main.go:141] libmachine: Using API Version  1
	I0804 00:40:14.035813   26761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:14.036130   26761 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:14.036283   26761 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
	I0804 00:40:14.039002   26761 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:40:14.039526   26761 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:40:14.039571   26761 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:40:14.039694   26761 host.go:66] Checking if "ha-230158-m03" exists ...
	I0804 00:40:14.040037   26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:14.040069   26761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:14.055218   26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41869
	I0804 00:40:14.055638   26761 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:14.056054   26761 main.go:141] libmachine: Using API Version  1
	I0804 00:40:14.056072   26761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:14.056374   26761 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:14.056527   26761 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
	I0804 00:40:14.056712   26761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:40:14.056729   26761 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	I0804 00:40:14.059034   26761 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:40:14.059422   26761 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:40:14.059458   26761 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:40:14.059601   26761 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
	I0804 00:40:14.059749   26761 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:40:14.059918   26761 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
	I0804 00:40:14.060057   26761 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
	I0804 00:40:14.138356   26761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:40:14.156650   26761 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:40:14.156675   26761 api_server.go:166] Checking apiserver status ...
	I0804 00:40:14.156703   26761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:40:14.173050   26761 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup
	W0804 00:40:14.182584   26761 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:40:14.182640   26761 ssh_runner.go:195] Run: ls
	I0804 00:40:14.187124   26761 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 00:40:14.196154   26761 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 00:40:14.196188   26761 status.go:422] ha-230158-m03 apiserver status = Running (err=<nil>)
	I0804 00:40:14.196200   26761 status.go:257] ha-230158-m03 status: &{Name:ha-230158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:40:14.196226   26761 status.go:255] checking status of ha-230158-m04 ...
	I0804 00:40:14.196556   26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:14.196593   26761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:14.211442   26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37811
	I0804 00:40:14.211868   26761 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:14.212341   26761 main.go:141] libmachine: Using API Version  1
	I0804 00:40:14.212369   26761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:14.212697   26761 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:14.212874   26761 main.go:141] libmachine: (ha-230158-m04) Calling .GetState
	I0804 00:40:14.214455   26761 status.go:330] ha-230158-m04 host status = "Running" (err=<nil>)
	I0804 00:40:14.214486   26761 host.go:66] Checking if "ha-230158-m04" exists ...
	I0804 00:40:14.214796   26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:14.214835   26761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:14.229872   26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46881
	I0804 00:40:14.230263   26761 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:14.230721   26761 main.go:141] libmachine: Using API Version  1
	I0804 00:40:14.230740   26761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:14.231029   26761 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:14.231192   26761 main.go:141] libmachine: (ha-230158-m04) Calling .GetIP
	I0804 00:40:14.233848   26761 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:40:14.234415   26761 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
	I0804 00:40:14.234465   26761 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:40:14.234622   26761 host.go:66] Checking if "ha-230158-m04" exists ...
	I0804 00:40:14.234957   26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:40:14.234990   26761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:40:14.251015   26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44805
	I0804 00:40:14.251440   26761 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:40:14.251845   26761 main.go:141] libmachine: Using API Version  1
	I0804 00:40:14.251863   26761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:40:14.252161   26761 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:40:14.252336   26761 main.go:141] libmachine: (ha-230158-m04) Calling .DriverName
	I0804 00:40:14.252511   26761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:40:14.252528   26761 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHHostname
	I0804 00:40:14.254949   26761 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:40:14.255269   26761 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
	I0804 00:40:14.255286   26761 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:40:14.255427   26761 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHPort
	I0804 00:40:14.255582   26761 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHKeyPath
	I0804 00:40:14.255727   26761 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHUsername
	I0804 00:40:14.255855   26761 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m04/id_rsa Username:docker}
	I0804 00:40:14.333822   26761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:40:14.348585   26761 status.go:257] ha-230158-m04 status: &{Name:ha-230158-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-230158 -n ha-230158
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-230158 logs -n 25: (1.133054616s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-230158 ssh -n                                                                | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | ha-230158-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-230158 cp ha-230158-m03:/home/docker/cp-test.txt                             | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | ha-230158:/home/docker/cp-test_ha-230158-m03_ha-230158.txt                      |           |         |         |                     |                     |
	| ssh     | ha-230158 ssh -n                                                                | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | ha-230158-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-230158 ssh -n ha-230158 sudo cat                                             | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | /home/docker/cp-test_ha-230158-m03_ha-230158.txt                                |           |         |         |                     |                     |
	| cp      | ha-230158 cp ha-230158-m03:/home/docker/cp-test.txt                             | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | ha-230158-m02:/home/docker/cp-test_ha-230158-m03_ha-230158-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-230158 ssh -n                                                                | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | ha-230158-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-230158 ssh -n ha-230158-m02 sudo cat                                         | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | /home/docker/cp-test_ha-230158-m03_ha-230158-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-230158 cp ha-230158-m03:/home/docker/cp-test.txt                             | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | ha-230158-m04:/home/docker/cp-test_ha-230158-m03_ha-230158-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-230158 ssh -n                                                                | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | ha-230158-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-230158 ssh -n ha-230158-m04 sudo cat                                         | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | /home/docker/cp-test_ha-230158-m03_ha-230158-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-230158 cp testdata/cp-test.txt                                               | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | ha-230158-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-230158 ssh -n                                                                | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | ha-230158-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-230158 cp ha-230158-m04:/home/docker/cp-test.txt                             | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile571222237/001/cp-test_ha-230158-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-230158 ssh -n                                                                | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | ha-230158-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-230158 cp ha-230158-m04:/home/docker/cp-test.txt                             | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | ha-230158:/home/docker/cp-test_ha-230158-m04_ha-230158.txt                      |           |         |         |                     |                     |
	| ssh     | ha-230158 ssh -n                                                                | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | ha-230158-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-230158 ssh -n ha-230158 sudo cat                                             | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | /home/docker/cp-test_ha-230158-m04_ha-230158.txt                                |           |         |         |                     |                     |
	| cp      | ha-230158 cp ha-230158-m04:/home/docker/cp-test.txt                             | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | ha-230158-m02:/home/docker/cp-test_ha-230158-m04_ha-230158-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-230158 ssh -n                                                                | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | ha-230158-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-230158 ssh -n ha-230158-m02 sudo cat                                         | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | /home/docker/cp-test_ha-230158-m04_ha-230158-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-230158 cp ha-230158-m04:/home/docker/cp-test.txt                             | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | ha-230158-m03:/home/docker/cp-test_ha-230158-m04_ha-230158-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-230158 ssh -n                                                                | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | ha-230158-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-230158 ssh -n ha-230158-m03 sudo cat                                         | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | /home/docker/cp-test_ha-230158-m04_ha-230158-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-230158 node stop m02 -v=7                                                    | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-230158 node start m02 -v=7                                                   | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:32:30
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:32:30.855673   21140 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:32:30.855914   21140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:32:30.855922   21140 out.go:304] Setting ErrFile to fd 2...
	I0804 00:32:30.855926   21140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:32:30.856094   21140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
	I0804 00:32:30.856624   21140 out.go:298] Setting JSON to false
	I0804 00:32:30.857452   21140 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":901,"bootTime":1722730650,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:32:30.857503   21140 start.go:139] virtualization: kvm guest
	I0804 00:32:30.859407   21140 out.go:177] * [ha-230158] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:32:30.860777   21140 notify.go:220] Checking for updates...
	I0804 00:32:30.860790   21140 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 00:32:30.862263   21140 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:32:30.863516   21140 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-3947/kubeconfig
	I0804 00:32:30.864678   21140 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-3947/.minikube
	I0804 00:32:30.865850   21140 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:32:30.867244   21140 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:32:30.868638   21140 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:32:30.902700   21140 out.go:177] * Using the kvm2 driver based on user configuration
	I0804 00:32:30.903896   21140 start.go:297] selected driver: kvm2
	I0804 00:32:30.903910   21140 start.go:901] validating driver "kvm2" against <nil>
	I0804 00:32:30.903929   21140 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:32:30.904664   21140 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:32:30.904725   21140 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-3947/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:32:30.920763   21140 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:32:30.920824   21140 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0804 00:32:30.921056   21140 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:32:30.921140   21140 cni.go:84] Creating CNI manager for ""
	I0804 00:32:30.921155   21140 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0804 00:32:30.921162   21140 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0804 00:32:30.921247   21140 start.go:340] cluster config:
	{Name:ha-230158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-230158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:32:30.921381   21140 iso.go:125] acquiring lock: {Name:mk61d89caa127145c801001852615ed27862a97f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:32:30.923111   21140 out.go:177] * Starting "ha-230158" primary control-plane node in "ha-230158" cluster
	I0804 00:32:30.924560   21140 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0804 00:32:30.924602   21140 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0804 00:32:30.924614   21140 cache.go:56] Caching tarball of preloaded images
	I0804 00:32:30.925772   21140 preload.go:172] Found /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0804 00:32:30.925794   21140 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0804 00:32:30.926310   21140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
	I0804 00:32:30.926344   21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json: {Name:mk27b5858edb4d8a82fada41a2f7df8a81efcd09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:32:30.926532   21140 start.go:360] acquireMachinesLock for ha-230158: {Name:mk3c8b650475b5a29be5f1e49e0345d4de7c1632 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:32:30.926583   21140 start.go:364] duration metric: took 25.422µs to acquireMachinesLock for "ha-230158"
	I0804 00:32:30.926607   21140 start.go:93] Provisioning new machine with config: &{Name:ha-230158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-230158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 00:32:30.926689   21140 start.go:125] createHost starting for "" (driver="kvm2")
	I0804 00:32:30.928257   21140 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0804 00:32:30.928406   21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:32:30.928460   21140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:32:30.942414   21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41165
	I0804 00:32:30.942878   21140 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:32:30.943469   21140 main.go:141] libmachine: Using API Version  1
	I0804 00:32:30.943490   21140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:32:30.943821   21140 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:32:30.943988   21140 main.go:141] libmachine: (ha-230158) Calling .GetMachineName
	I0804 00:32:30.944139   21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:32:30.944285   21140 start.go:159] libmachine.API.Create for "ha-230158" (driver="kvm2")
	I0804 00:32:30.944309   21140 client.go:168] LocalClient.Create starting
	I0804 00:32:30.944336   21140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem
	I0804 00:32:30.944365   21140 main.go:141] libmachine: Decoding PEM data...
	I0804 00:32:30.944378   21140 main.go:141] libmachine: Parsing certificate...
	I0804 00:32:30.944432   21140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem
	I0804 00:32:30.944453   21140 main.go:141] libmachine: Decoding PEM data...
	I0804 00:32:30.944465   21140 main.go:141] libmachine: Parsing certificate...
	I0804 00:32:30.944480   21140 main.go:141] libmachine: Running pre-create checks...
	I0804 00:32:30.944489   21140 main.go:141] libmachine: (ha-230158) Calling .PreCreateCheck
	I0804 00:32:30.944788   21140 main.go:141] libmachine: (ha-230158) Calling .GetConfigRaw
	I0804 00:32:30.945189   21140 main.go:141] libmachine: Creating machine...
	I0804 00:32:30.945217   21140 main.go:141] libmachine: (ha-230158) Calling .Create
	I0804 00:32:30.945352   21140 main.go:141] libmachine: (ha-230158) Creating KVM machine...
	I0804 00:32:30.946565   21140 main.go:141] libmachine: (ha-230158) DBG | found existing default KVM network
	I0804 00:32:30.947248   21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:30.947097   21164 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0804 00:32:30.947281   21140 main.go:141] libmachine: (ha-230158) DBG | created network xml: 
	I0804 00:32:30.947299   21140 main.go:141] libmachine: (ha-230158) DBG | <network>
	I0804 00:32:30.947308   21140 main.go:141] libmachine: (ha-230158) DBG |   <name>mk-ha-230158</name>
	I0804 00:32:30.947316   21140 main.go:141] libmachine: (ha-230158) DBG |   <dns enable='no'/>
	I0804 00:32:30.947323   21140 main.go:141] libmachine: (ha-230158) DBG |   
	I0804 00:32:30.947335   21140 main.go:141] libmachine: (ha-230158) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0804 00:32:30.947344   21140 main.go:141] libmachine: (ha-230158) DBG |     <dhcp>
	I0804 00:32:30.947354   21140 main.go:141] libmachine: (ha-230158) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0804 00:32:30.947367   21140 main.go:141] libmachine: (ha-230158) DBG |     </dhcp>
	I0804 00:32:30.947404   21140 main.go:141] libmachine: (ha-230158) DBG |   </ip>
	I0804 00:32:30.947418   21140 main.go:141] libmachine: (ha-230158) DBG |   
	I0804 00:32:30.947424   21140 main.go:141] libmachine: (ha-230158) DBG | </network>
	I0804 00:32:30.947429   21140 main.go:141] libmachine: (ha-230158) DBG | 
	I0804 00:32:30.952537   21140 main.go:141] libmachine: (ha-230158) DBG | trying to create private KVM network mk-ha-230158 192.168.39.0/24...
	I0804 00:32:31.015570   21140 main.go:141] libmachine: (ha-230158) DBG | private KVM network mk-ha-230158 192.168.39.0/24 created
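	The step above defines and starts a private libvirt network from the dumped XML. A minimal sketch of the same operation with the libvirt.org/go/libvirt bindings (not minikube's actual driver code; the XML is copied from the DBG dump above):

	package main

	import (
		"log"

		libvirt "libvirt.org/go/libvirt"
	)

	const networkXML = `<network>
	  <name>mk-ha-230158</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// Define the persistent network, then start it.
		net, err := conn.NetworkDefineXML(networkXML)
		if err != nil {
			log.Fatal(err)
		}
		defer net.Free()
		if err := net.Create(); err != nil {
			log.Fatal(err)
		}
		log.Println("private KVM network mk-ha-230158 created")
	}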
	I0804 00:32:31.015600   21140 main.go:141] libmachine: (ha-230158) Setting up store path in /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158 ...
	I0804 00:32:31.015614   21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:31.015548   21164 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-3947/.minikube
	I0804 00:32:31.015633   21140 main.go:141] libmachine: (ha-230158) Building disk image from file:///home/jenkins/minikube-integration/19364-3947/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0804 00:32:31.015705   21140 main.go:141] libmachine: (ha-230158) Downloading /home/jenkins/minikube-integration/19364-3947/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-3947/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0804 00:32:31.252936   21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:31.252797   21164 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa...
	I0804 00:32:31.559361   21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:31.559217   21164 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/ha-230158.rawdisk...
	I0804 00:32:31.559386   21140 main.go:141] libmachine: (ha-230158) DBG | Writing magic tar header
	I0804 00:32:31.559396   21140 main.go:141] libmachine: (ha-230158) DBG | Writing SSH key tar header
	I0804 00:32:31.559404   21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:31.559340   21164 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158 ...
	I0804 00:32:31.559525   21140 main.go:141] libmachine: (ha-230158) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158
	I0804 00:32:31.559557   21140 main.go:141] libmachine: (ha-230158) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158 (perms=drwx------)
	I0804 00:32:31.559573   21140 main.go:141] libmachine: (ha-230158) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947/.minikube/machines
	I0804 00:32:31.559638   21140 main.go:141] libmachine: (ha-230158) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947/.minikube/machines (perms=drwxr-xr-x)
	I0804 00:32:31.559673   21140 main.go:141] libmachine: (ha-230158) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947/.minikube (perms=drwxr-xr-x)
	I0804 00:32:31.559685   21140 main.go:141] libmachine: (ha-230158) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947/.minikube
	I0804 00:32:31.559705   21140 main.go:141] libmachine: (ha-230158) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947
	I0804 00:32:31.559718   21140 main.go:141] libmachine: (ha-230158) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0804 00:32:31.559732   21140 main.go:141] libmachine: (ha-230158) DBG | Checking permissions on dir: /home/jenkins
	I0804 00:32:31.559747   21140 main.go:141] libmachine: (ha-230158) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947 (perms=drwxrwxr-x)
	I0804 00:32:31.559760   21140 main.go:141] libmachine: (ha-230158) DBG | Checking permissions on dir: /home
	I0804 00:32:31.559783   21140 main.go:141] libmachine: (ha-230158) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0804 00:32:31.559796   21140 main.go:141] libmachine: (ha-230158) DBG | Skipping /home - not owner
	I0804 00:32:31.559814   21140 main.go:141] libmachine: (ha-230158) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0804 00:32:31.559825   21140 main.go:141] libmachine: (ha-230158) Creating domain...
	I0804 00:32:31.561006   21140 main.go:141] libmachine: (ha-230158) define libvirt domain using xml: 
	I0804 00:32:31.561039   21140 main.go:141] libmachine: (ha-230158) <domain type='kvm'>
	I0804 00:32:31.561049   21140 main.go:141] libmachine: (ha-230158)   <name>ha-230158</name>
	I0804 00:32:31.561057   21140 main.go:141] libmachine: (ha-230158)   <memory unit='MiB'>2200</memory>
	I0804 00:32:31.561067   21140 main.go:141] libmachine: (ha-230158)   <vcpu>2</vcpu>
	I0804 00:32:31.561072   21140 main.go:141] libmachine: (ha-230158)   <features>
	I0804 00:32:31.561082   21140 main.go:141] libmachine: (ha-230158)     <acpi/>
	I0804 00:32:31.561086   21140 main.go:141] libmachine: (ha-230158)     <apic/>
	I0804 00:32:31.561092   21140 main.go:141] libmachine: (ha-230158)     <pae/>
	I0804 00:32:31.561101   21140 main.go:141] libmachine: (ha-230158)     
	I0804 00:32:31.561106   21140 main.go:141] libmachine: (ha-230158)   </features>
	I0804 00:32:31.561111   21140 main.go:141] libmachine: (ha-230158)   <cpu mode='host-passthrough'>
	I0804 00:32:31.561119   21140 main.go:141] libmachine: (ha-230158)   
	I0804 00:32:31.561123   21140 main.go:141] libmachine: (ha-230158)   </cpu>
	I0804 00:32:31.561146   21140 main.go:141] libmachine: (ha-230158)   <os>
	I0804 00:32:31.561165   21140 main.go:141] libmachine: (ha-230158)     <type>hvm</type>
	I0804 00:32:31.561172   21140 main.go:141] libmachine: (ha-230158)     <boot dev='cdrom'/>
	I0804 00:32:31.561187   21140 main.go:141] libmachine: (ha-230158)     <boot dev='hd'/>
	I0804 00:32:31.561198   21140 main.go:141] libmachine: (ha-230158)     <bootmenu enable='no'/>
	I0804 00:32:31.561213   21140 main.go:141] libmachine: (ha-230158)   </os>
	I0804 00:32:31.561219   21140 main.go:141] libmachine: (ha-230158)   <devices>
	I0804 00:32:31.561226   21140 main.go:141] libmachine: (ha-230158)     <disk type='file' device='cdrom'>
	I0804 00:32:31.561235   21140 main.go:141] libmachine: (ha-230158)       <source file='/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/boot2docker.iso'/>
	I0804 00:32:31.561244   21140 main.go:141] libmachine: (ha-230158)       <target dev='hdc' bus='scsi'/>
	I0804 00:32:31.561249   21140 main.go:141] libmachine: (ha-230158)       <readonly/>
	I0804 00:32:31.561256   21140 main.go:141] libmachine: (ha-230158)     </disk>
	I0804 00:32:31.561261   21140 main.go:141] libmachine: (ha-230158)     <disk type='file' device='disk'>
	I0804 00:32:31.561272   21140 main.go:141] libmachine: (ha-230158)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0804 00:32:31.561280   21140 main.go:141] libmachine: (ha-230158)       <source file='/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/ha-230158.rawdisk'/>
	I0804 00:32:31.561285   21140 main.go:141] libmachine: (ha-230158)       <target dev='hda' bus='virtio'/>
	I0804 00:32:31.561292   21140 main.go:141] libmachine: (ha-230158)     </disk>
	I0804 00:32:31.561296   21140 main.go:141] libmachine: (ha-230158)     <interface type='network'>
	I0804 00:32:31.561305   21140 main.go:141] libmachine: (ha-230158)       <source network='mk-ha-230158'/>
	I0804 00:32:31.561309   21140 main.go:141] libmachine: (ha-230158)       <model type='virtio'/>
	I0804 00:32:31.561334   21140 main.go:141] libmachine: (ha-230158)     </interface>
	I0804 00:32:31.561356   21140 main.go:141] libmachine: (ha-230158)     <interface type='network'>
	I0804 00:32:31.561367   21140 main.go:141] libmachine: (ha-230158)       <source network='default'/>
	I0804 00:32:31.561377   21140 main.go:141] libmachine: (ha-230158)       <model type='virtio'/>
	I0804 00:32:31.561386   21140 main.go:141] libmachine: (ha-230158)     </interface>
	I0804 00:32:31.561396   21140 main.go:141] libmachine: (ha-230158)     <serial type='pty'>
	I0804 00:32:31.561405   21140 main.go:141] libmachine: (ha-230158)       <target port='0'/>
	I0804 00:32:31.561412   21140 main.go:141] libmachine: (ha-230158)     </serial>
	I0804 00:32:31.561418   21140 main.go:141] libmachine: (ha-230158)     <console type='pty'>
	I0804 00:32:31.561433   21140 main.go:141] libmachine: (ha-230158)       <target type='serial' port='0'/>
	I0804 00:32:31.561445   21140 main.go:141] libmachine: (ha-230158)     </console>
	I0804 00:32:31.561455   21140 main.go:141] libmachine: (ha-230158)     <rng model='virtio'>
	I0804 00:32:31.561464   21140 main.go:141] libmachine: (ha-230158)       <backend model='random'>/dev/random</backend>
	I0804 00:32:31.561482   21140 main.go:141] libmachine: (ha-230158)     </rng>
	I0804 00:32:31.561492   21140 main.go:141] libmachine: (ha-230158)     
	I0804 00:32:31.561496   21140 main.go:141] libmachine: (ha-230158)     
	I0804 00:32:31.561505   21140 main.go:141] libmachine: (ha-230158)   </devices>
	I0804 00:32:31.561515   21140 main.go:141] libmachine: (ha-230158) </domain>
	I0804 00:32:31.561527   21140 main.go:141] libmachine: (ha-230158) 
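	The domain XML above is handed to libvirt in two steps: define the persistent domain, then boot it (the "Creating domain..." lines). A sketch under the same assumptions, again using libvirt.org/go/libvirt rather than minikube's own wrapper; domainXML stands in for the full dump:

	package machine

	import (
		"fmt"

		libvirt "libvirt.org/go/libvirt"
	)

	// createDomain defines the persistent domain from its XML and boots it.
	func createDomain(conn *libvirt.Connect, domainXML string) error {
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			return fmt.Errorf("define domain: %w", err)
		}
		defer dom.Free()
		// Create() starts the freshly defined VM.
		return dom.Create()
	}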
	I0804 00:32:31.565606   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:0e:1a:c8 in network default
	I0804 00:32:31.566145   21140 main.go:141] libmachine: (ha-230158) Ensuring networks are active...
	I0804 00:32:31.566160   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:31.566849   21140 main.go:141] libmachine: (ha-230158) Ensuring network default is active
	I0804 00:32:31.567149   21140 main.go:141] libmachine: (ha-230158) Ensuring network mk-ha-230158 is active
	I0804 00:32:31.567594   21140 main.go:141] libmachine: (ha-230158) Getting domain xml...
	I0804 00:32:31.568314   21140 main.go:141] libmachine: (ha-230158) Creating domain...
	I0804 00:32:32.752103   21140 main.go:141] libmachine: (ha-230158) Waiting to get IP...
	I0804 00:32:32.752842   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:32.753189   21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
	I0804 00:32:32.753225   21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:32.753166   21164 retry.go:31] will retry after 301.695034ms: waiting for machine to come up
	I0804 00:32:33.056838   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:33.057343   21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
	I0804 00:32:33.057374   21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:33.057292   21164 retry.go:31] will retry after 345.614204ms: waiting for machine to come up
	I0804 00:32:33.405071   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:33.405512   21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
	I0804 00:32:33.405539   21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:33.405464   21164 retry.go:31] will retry after 316.091612ms: waiting for machine to come up
	I0804 00:32:33.723721   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:33.724168   21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
	I0804 00:32:33.724194   21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:33.724122   21164 retry.go:31] will retry after 558.911264ms: waiting for machine to come up
	I0804 00:32:34.284352   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:34.284769   21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
	I0804 00:32:34.284790   21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:34.284728   21164 retry.go:31] will retry after 465.210228ms: waiting for machine to come up
	I0804 00:32:34.751423   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:34.751758   21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
	I0804 00:32:34.751786   21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:34.751726   21164 retry.go:31] will retry after 609.962342ms: waiting for machine to come up
	I0804 00:32:35.363533   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:35.363913   21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
	I0804 00:32:35.363947   21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:35.363876   21164 retry.go:31] will retry after 731.983307ms: waiting for machine to come up
	I0804 00:32:36.097612   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:36.098025   21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
	I0804 00:32:36.098052   21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:36.097985   21164 retry.go:31] will retry after 1.047630115s: waiting for machine to come up
	I0804 00:32:37.147182   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:37.147727   21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
	I0804 00:32:37.147766   21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:37.147690   21164 retry.go:31] will retry after 1.221202371s: waiting for machine to come up
	I0804 00:32:38.371009   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:38.371502   21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
	I0804 00:32:38.371531   21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:38.371443   21164 retry.go:31] will retry after 2.01003947s: waiting for machine to come up
	I0804 00:32:40.384779   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:40.385213   21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
	I0804 00:32:40.385237   21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:40.385155   21164 retry.go:31] will retry after 2.043530448s: waiting for machine to come up
	I0804 00:32:42.430015   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:42.430527   21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
	I0804 00:32:42.430553   21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:42.430486   21164 retry.go:31] will retry after 2.637093898s: waiting for machine to come up
	I0804 00:32:45.071390   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:45.071939   21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
	I0804 00:32:45.071962   21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:45.071899   21164 retry.go:31] will retry after 3.860426233s: waiting for machine to come up
	I0804 00:32:48.936168   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:48.936555   21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
	I0804 00:32:48.936574   21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:48.936510   21164 retry.go:31] will retry after 5.157668556s: waiting for machine to come up
	I0804 00:32:54.097780   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:54.098254   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has current primary IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:54.098278   21140 main.go:141] libmachine: (ha-230158) Found IP for machine: 192.168.39.132
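	The "Waiting to get IP" loop above polls the network's DHCP leases for the VM's MAC address, retrying with a growing delay. A sketch of that loop follows; waitForIP and the backoff constants are illustrative, not minikube's own code, while the lease field names (Mac, IPaddr) match the lease dump in the log:

	package machine

	import (
		"fmt"
		"strings"
		"time"

		libvirt "libvirt.org/go/libvirt"
	)

	// waitForIP polls the network's DHCP leases until one matches the MAC.
	func waitForIP(net *libvirt.Network, mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			leases, err := net.GetDHCPLeases()
			if err != nil {
				return "", err
			}
			for _, l := range leases {
				if strings.EqualFold(l.Mac, mac) && l.IPaddr != "" {
					return l.IPaddr, nil
				}
			}
			time.Sleep(delay)
			delay *= 2 // crude exponential backoff between retries
		}
		return "", fmt.Errorf("no DHCP lease for %s within %s", mac, timeout)
	}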
	I0804 00:32:54.098291   21140 main.go:141] libmachine: (ha-230158) Reserving static IP address...
	I0804 00:32:54.098729   21140 main.go:141] libmachine: (ha-230158) DBG | unable to find host DHCP lease matching {name: "ha-230158", mac: "52:54:00:a9:92:75", ip: "192.168.39.132"} in network mk-ha-230158
	I0804 00:32:54.167146   21140 main.go:141] libmachine: (ha-230158) DBG | Getting to WaitForSSH function...
	I0804 00:32:54.167178   21140 main.go:141] libmachine: (ha-230158) Reserved static IP address: 192.168.39.132
	I0804 00:32:54.167210   21140 main.go:141] libmachine: (ha-230158) Waiting for SSH to be available...
	I0804 00:32:54.169968   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:54.170456   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a9:92:75}
	I0804 00:32:54.170483   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:54.170647   21140 main.go:141] libmachine: (ha-230158) DBG | Using SSH client type: external
	I0804 00:32:54.170673   21140 main.go:141] libmachine: (ha-230158) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa (-rw-------)
	I0804 00:32:54.170698   21140 main.go:141] libmachine: (ha-230158) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:32:54.170707   21140 main.go:141] libmachine: (ha-230158) DBG | About to run SSH command:
	I0804 00:32:54.170724   21140 main.go:141] libmachine: (ha-230158) DBG | exit 0
	I0804 00:32:54.294070   21140 main.go:141] libmachine: (ha-230158) DBG | SSH cmd err, output: <nil>: 
	I0804 00:32:54.294349   21140 main.go:141] libmachine: (ha-230158) KVM machine creation complete!
	I0804 00:32:54.294681   21140 main.go:141] libmachine: (ha-230158) Calling .GetConfigRaw
	I0804 00:32:54.295181   21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:32:54.295461   21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:32:54.295648   21140 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0804 00:32:54.295663   21140 main.go:141] libmachine: (ha-230158) Calling .GetState
	I0804 00:32:54.296807   21140 main.go:141] libmachine: Detecting operating system of created instance...
	I0804 00:32:54.296822   21140 main.go:141] libmachine: Waiting for SSH to be available...
	I0804 00:32:54.296827   21140 main.go:141] libmachine: Getting to WaitForSSH function...
	I0804 00:32:54.296832   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:32:54.299017   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:54.299319   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:32:54.299341   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:54.299424   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:32:54.299607   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:32:54.299762   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:32:54.299937   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:32:54.300060   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:32:54.300244   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:32:54.300256   21140 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0804 00:32:54.405542   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
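	The WaitForSSH probe simply runs "exit 0" over SSH until it succeeds. A standalone sketch with golang.org/x/crypto/ssh; minikube's native client wraps the same idea, and the options here mirror the external-ssh flags logged earlier (user docker, key auth, no host-key checking):

	package main

	import (
		"log"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // StrictHostKeyChecking=no
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", "192.168.39.132:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		// The no-op command the log shows; success means sshd is up.
		if err := sess.Run("exit 0"); err != nil {
			log.Fatalf("ssh not ready: %v", err)
		}
		log.Println("SSH available")
	}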
	I0804 00:32:54.405565   21140 main.go:141] libmachine: Detecting the provisioner...
	I0804 00:32:54.405575   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:32:54.407782   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:54.408139   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:32:54.408168   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:54.408286   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:32:54.408492   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:32:54.408647   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:32:54.408783   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:32:54.408938   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:32:54.409095   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:32:54.409105   21140 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0804 00:32:54.514801   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0804 00:32:54.514871   21140 main.go:141] libmachine: found compatible host: buildroot
	I0804 00:32:54.514884   21140 main.go:141] libmachine: Provisioning with buildroot...
	I0804 00:32:54.514896   21140 main.go:141] libmachine: (ha-230158) Calling .GetMachineName
	I0804 00:32:54.515131   21140 buildroot.go:166] provisioning hostname "ha-230158"
	I0804 00:32:54.515160   21140 main.go:141] libmachine: (ha-230158) Calling .GetMachineName
	I0804 00:32:54.515363   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:32:54.517892   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:54.518220   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:32:54.518267   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:54.518438   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:32:54.518621   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:32:54.518792   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:32:54.518962   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:32:54.519189   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:32:54.519366   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:32:54.519384   21140 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-230158 && echo "ha-230158" | sudo tee /etc/hostname
	I0804 00:32:54.640261   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-230158
	
	I0804 00:32:54.640282   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:32:54.642938   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:54.643365   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:32:54.643386   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:54.643520   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:32:54.643683   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:32:54.643833   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:32:54.643976   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:32:54.644169   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:32:54.644351   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:32:54.644371   21140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-230158' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-230158/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-230158' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:32:54.758999   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
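	The hostname fix-up above is a single idempotent shell snippet rendered per machine: it only touches /etc/hosts if the name is missing, editing the 127.0.1.1 entry in place when one exists. A sketch of how it can be parameterized; hostsCommand is an illustrative helper name, and the shell body is the one the log echoes:

	package machine

	import "fmt"

	// hostsCommand renders the /etc/hosts fix-up for an arbitrary hostname.
	func hostsCommand(name string) string {
		return fmt.Sprintf(
			"if ! grep -xq '.*\\s%[1]s' /etc/hosts; then "+
				"if grep -xq '127.0.1.1\\s.*' /etc/hosts; then "+
				"sudo sed -i 's/^127.0.1.1\\s.*/127.0.1.1 %[1]s/g' /etc/hosts; "+
				"else echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; fi; fi",
			name)
	}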
	I0804 00:32:54.759031   21140 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-3947/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-3947/.minikube}
	I0804 00:32:54.759061   21140 buildroot.go:174] setting up certificates
	I0804 00:32:54.759070   21140 provision.go:84] configureAuth start
	I0804 00:32:54.759079   21140 main.go:141] libmachine: (ha-230158) Calling .GetMachineName
	I0804 00:32:54.759335   21140 main.go:141] libmachine: (ha-230158) Calling .GetIP
	I0804 00:32:54.761860   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:54.762208   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:32:54.762254   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:54.762353   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:32:54.764447   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:54.764735   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:32:54.764777   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:54.764853   21140 provision.go:143] copyHostCerts
	I0804 00:32:54.764875   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem
	I0804 00:32:54.764913   21140 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem, removing ...
	I0804 00:32:54.764921   21140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem
	I0804 00:32:54.764981   21140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem (1082 bytes)
	I0804 00:32:54.765047   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem
	I0804 00:32:54.765064   21140 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem, removing ...
	I0804 00:32:54.765070   21140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem
	I0804 00:32:54.765091   21140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem (1123 bytes)
	I0804 00:32:54.765129   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem
	I0804 00:32:54.765145   21140 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem, removing ...
	I0804 00:32:54.765150   21140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem
	I0804 00:32:54.765171   21140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem (1679 bytes)
	I0804 00:32:54.765212   21140 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem org=jenkins.ha-230158 san=[127.0.0.1 192.168.39.132 ha-230158 localhost minikube]
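	The server certificate above is signed by the machine CA with the SANs listed in the log line. A self-contained crypto/x509 sketch of that step; the CA is generated inline only to keep it runnable, whereas minikube loads ca.pem/ca-key.pem from disk, and the 26280h lifetime echoes the CertExpiration value in the cluster config:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"log"
		"math/big"
		"net"
		"time"
	)

	func main() {
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			log.Fatal(err)
		}

		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-230158"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// san=[127.0.0.1 192.168.39.132 ha-230158 localhost minikube]
			DNSNames:    []string{"ha-230158", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.132")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("server cert: %d bytes DER, signed by machine CA", len(srvDER))
	}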
	I0804 00:32:54.838410   21140 provision.go:177] copyRemoteCerts
	I0804 00:32:54.838457   21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:32:54.838481   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:32:54.840788   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:54.841102   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:32:54.841131   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:54.841270   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:32:54.841468   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:32:54.841641   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:32:54.841763   21140 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
	I0804 00:32:54.924469   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0804 00:32:54.924532   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:32:54.948277   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0804 00:32:54.948339   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0804 00:32:54.971889   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0804 00:32:54.971954   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:32:54.995038   21140 provision.go:87] duration metric: took 235.956813ms to configureAuth
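An aside on the configureAuth step that just completed: the server certificate is signed by the machine CA and carries a SAN list covering every address the Docker daemon answers on (127.0.0.1, the guest IP, the hostname, localhost, minikube). A minimal Go sketch of issuing such a cert with crypto/x509, as an illustration only, not minikube's actual provisioning code; the inline self-signed CA and output file name are assumptions:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical CA; the real flow loads ca.pem/ca-key.pem from disk.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SAN set from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-230158"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.132")},
		DNSNames:     []string{"ha-230158", "localhost", "minikube"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	_ = os.WriteFile("server.pem", pemBytes, 0o644)
}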
	I0804 00:32:54.995085   21140 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:32:54.995245   21140 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:32:54.995269   21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:32:54.995535   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:32:54.998409   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:54.998785   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:32:54.998809   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:54.998968   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:32:54.999136   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:32:54.999273   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:32:54.999394   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:32:54.999563   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:32:54.999719   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:32:54.999730   21140 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 00:32:55.107480   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0804 00:32:55.107506   21140 buildroot.go:70] root file system type: tmpfs
	I0804 00:32:55.107642   21140 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 00:32:55.107667   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:32:55.110196   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:55.110640   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:32:55.110660   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:55.110846   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:32:55.111020   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:32:55.111149   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:32:55.111265   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:32:55.111429   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:32:55.111608   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:32:55.111668   21140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 00:32:55.228500   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 00:32:55.228527   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:32:55.231052   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:55.231414   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:32:55.231450   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:55.231603   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:32:55.231773   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:32:55.231921   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:32:55.232099   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:32:55.232281   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:32:55.232491   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:32:55.232517   21140 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0804 00:32:56.991416   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
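A note on the diff-or-swap command above: the candidate unit is written to docker.service.new, compared to the installed copy, and only swapped in (with daemon-reload, enable, restart) when they differ; on this fresh VM the diff fails with "can't stat", so the new file is installed unconditionally. A rough local sketch of the same idempotent-update pattern, with hypothetical paths:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit installs content at path only when it differs from what is
// already there, then reloads systemd and restarts the service.
func updateUnit(path string, content []byte, service string) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, content) {
		return nil // unchanged: skip the disruptive restart
	}
	if err := os.WriteFile(path+".new", content, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", service},
		{"restart", service},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	if err := updateUnit("/tmp/docker.service", unit, "docker"); err != nil {
		fmt.Println(err)
	}
}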
	
	I0804 00:32:56.991440   21140 main.go:141] libmachine: Checking connection to Docker...
	I0804 00:32:56.991448   21140 main.go:141] libmachine: (ha-230158) Calling .GetURL
	I0804 00:32:56.992552   21140 main.go:141] libmachine: (ha-230158) DBG | Using libvirt version 6000000
	I0804 00:32:56.994460   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:56.994745   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:32:56.994773   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:56.994931   21140 main.go:141] libmachine: Docker is up and running!
	I0804 00:32:56.994949   21140 main.go:141] libmachine: Reticulating splines...
	I0804 00:32:56.994957   21140 client.go:171] duration metric: took 26.050639623s to LocalClient.Create
	I0804 00:32:56.994980   21140 start.go:167] duration metric: took 26.050695026s to libmachine.API.Create "ha-230158"
	I0804 00:32:56.994992   21140 start.go:293] postStartSetup for "ha-230158" (driver="kvm2")
	I0804 00:32:56.995003   21140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:32:56.995019   21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:32:56.995233   21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:32:56.995259   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:32:56.997109   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:56.997414   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:32:56.997444   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:56.997570   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:32:56.997762   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:32:56.997937   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:32:56.998085   21140 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
	I0804 00:32:57.080881   21140 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:32:57.084955   21140 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:32:57.084974   21140 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-3947/.minikube/addons for local assets ...
	I0804 00:32:57.085029   21140 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-3947/.minikube/files for local assets ...
	I0804 00:32:57.085100   21140 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> 111362.pem in /etc/ssl/certs
	I0804 00:32:57.085109   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> /etc/ssl/certs/111362.pem
	I0804 00:32:57.085190   21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:32:57.094432   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem --> /etc/ssl/certs/111362.pem (1708 bytes)
	I0804 00:32:57.116709   21140 start.go:296] duration metric: took 121.696868ms for postStartSetup
	I0804 00:32:57.116750   21140 main.go:141] libmachine: (ha-230158) Calling .GetConfigRaw
	I0804 00:32:57.117323   21140 main.go:141] libmachine: (ha-230158) Calling .GetIP
	I0804 00:32:57.119831   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:57.120165   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:32:57.120212   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:57.120406   21140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
	I0804 00:32:57.120581   21140 start.go:128] duration metric: took 26.193880441s to createHost
	I0804 00:32:57.120604   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:32:57.123098   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:57.123407   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:32:57.123430   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:57.123566   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:32:57.123742   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:32:57.123910   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:32:57.124071   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:32:57.124217   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:32:57.124377   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:32:57.124389   21140 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0804 00:32:57.231043   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722731577.209951211
	
	I0804 00:32:57.231087   21140 fix.go:216] guest clock: 1722731577.209951211
	I0804 00:32:57.231098   21140 fix.go:229] Guest: 2024-08-04 00:32:57.209951211 +0000 UTC Remote: 2024-08-04 00:32:57.12059219 +0000 UTC m=+26.297674596 (delta=89.359021ms)
	I0804 00:32:57.231126   21140 fix.go:200] guest clock delta is within tolerance: 89.359021ms
	I0804 00:32:57.231133   21140 start.go:83] releasing machines lock for "ha-230158", held for 26.304539197s
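The guest-clock check just above works by running date +%s.%N in the VM, parsing the seconds.nanoseconds reading, and comparing it to the host clock; a resync only happens when the delta exceeds a tolerance. A sketch of the parse-and-compare step using the exact values from this log (the 1s threshold is an assumption for illustration, not minikube's fix.go logic):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1722731577.209951211" (date +%s.%N) into time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1722731577.209951211")
	// Host-side reading taken from the log line above.
	host := time.Date(2024, 8, 4, 0, 32, 57, 120592190, time.UTC)
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta > time.Second { // assumed tolerance for the sketch
		fmt.Println("guest clock needs a resync, delta:", delta)
	} else {
		fmt.Println("delta within tolerance:", delta) // ~89.359021ms here
	}
}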
	I0804 00:32:57.231163   21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:32:57.231428   21140 main.go:141] libmachine: (ha-230158) Calling .GetIP
	I0804 00:32:57.234051   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:57.234508   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:32:57.234537   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:57.234705   21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:32:57.235271   21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:32:57.235452   21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:32:57.235547   21140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:32:57.235576   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:32:57.235666   21140 ssh_runner.go:195] Run: cat /version.json
	I0804 00:32:57.235688   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:32:57.238053   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:57.238116   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:57.238447   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:32:57.238471   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:57.238495   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:32:57.238524   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:32:57.238607   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:32:57.238719   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:32:57.238788   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:32:57.238859   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:32:57.238933   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:32:57.238996   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:32:57.239052   21140 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
	I0804 00:32:57.239091   21140 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
	I0804 00:32:57.335233   21140 ssh_runner.go:195] Run: systemctl --version
	I0804 00:32:57.341045   21140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:32:57.346598   21140 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:32:57.346655   21140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:32:57.363478   21140 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:32:57.363507   21140 start.go:495] detecting cgroup driver to use...
	I0804 00:32:57.363613   21140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:32:57.381550   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0804 00:32:57.392232   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 00:32:57.402697   21140 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 00:32:57.402741   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 00:32:57.413230   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 00:32:57.423689   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 00:32:57.433882   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 00:32:57.444123   21140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:32:57.454604   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 00:32:57.464894   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 00:32:57.475126   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0804 00:32:57.485555   21140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:32:57.494566   21140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:32:57.503704   21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:32:57.609739   21140 ssh_runner.go:195] Run: sudo systemctl restart containerd
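Each sed above is an idempotent in-place edit of /etc/containerd/config.toml: match a key, rewrite its value, leave the rest alone. The SystemdCgroup edit rendered as a Go regexp for illustration, using the same pattern as the logged sed (the local file path is an assumption):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	conf, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Println(err)
	}
}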
	I0804 00:32:57.634502   21140 start.go:495] detecting cgroup driver to use...
	I0804 00:32:57.634579   21140 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 00:32:57.649722   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:32:57.663075   21140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:32:57.681027   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:32:57.694388   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 00:32:57.707836   21140 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0804 00:32:57.737257   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 00:32:57.750381   21140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:32:57.768686   21140 ssh_runner.go:195] Run: which cri-dockerd
	I0804 00:32:57.772533   21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 00:32:57.781420   21140 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0804 00:32:57.797649   21140 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 00:32:57.904330   21140 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 00:32:58.015103   21140 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 00:32:58.015241   21140 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0804 00:32:58.032390   21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:32:58.141269   21140 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 00:33:00.497232   21140 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.355928533s)
	I0804 00:33:00.497299   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 00:33:00.511224   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 00:33:00.524642   21140 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 00:33:00.633804   21140 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 00:33:00.745087   21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:33:00.867368   21140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 00:33:00.884032   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 00:33:00.898059   21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:33:01.002422   21140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 00:33:01.079045   21140 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 00:33:01.079118   21140 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0804 00:33:01.084853   21140 start.go:563] Will wait 60s for crictl version
	I0804 00:33:01.084906   21140 ssh_runner.go:195] Run: which crictl
	I0804 00:33:01.090370   21140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:33:01.127604   21140 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
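"Will wait 60s for socket path /var/run/cri-dockerd.sock" above boils down to polling stat until the socket appears or the deadline passes. A minimal sketch of such a wait loop (the 500ms poll interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for path until it exists or timeout elapses.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
}

func main() {
	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}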
	I0804 00:33:01.127655   21140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 00:33:01.154224   21140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 00:33:01.177225   21140 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0804 00:33:01.177331   21140 main.go:141] libmachine: (ha-230158) Calling .GetIP
	I0804 00:33:01.180121   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:33:01.180494   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:33:01.180522   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:33:01.180772   21140 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 00:33:01.184959   21140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
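The /etc/hosts rewrite above is a replace-then-append idiom: filter out any stale host.minikube.internal line, append the fresh "192.168.39.1<tab>host.minikube.internal" mapping, and copy the temp file back over /etc/hosts. The same idea in Go (a sketch; pointed at a scratch path here, since the real file needs root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry rewrites hostsPath so that exactly one line maps name to ip.
func setHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirror of: grep -v $'\thost.minikube.internal$'
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := setHostsEntry("/tmp/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}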
	I0804 00:33:01.198426   21140 kubeadm.go:883] updating cluster {Name:ha-230158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-230158 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:33:01.198549   21140 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0804 00:33:01.198599   21140 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 00:33:01.214393   21140 docker.go:685] Got preloaded images: 
	I0804 00:33:01.214411   21140 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0804 00:33:01.214450   21140 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0804 00:33:01.225255   21140 ssh_runner.go:195] Run: which lz4
	I0804 00:33:01.229351   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0804 00:33:01.229451   21140 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0804 00:33:01.233649   21140 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:33:01.233678   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0804 00:33:02.490141   21140 docker.go:649] duration metric: took 1.260715608s to copy over tarball
	I0804 00:33:02.490208   21140 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:33:04.338533   21140 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.848304605s)
	I0804 00:33:04.338558   21140 ssh_runner.go:146] rm: /preloaded.tar.lz4
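The stat at 00:33:01.229451 above gates the ~360 MB transfer: only when the existence check fails is the preload tarball copied onto the guest. A sketch of that check-then-copy gate; the real check also records size and mtime via stat -c "%s %y", while this sketch only tests existence, and the paths are placeholders:

package main

import (
	"fmt"
	"io"
	"os"
)

// copyIfMissing copies src to dst only when dst does not already exist,
// mirroring the stat-before-scp gate in the log.
func copyIfMissing(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		fmt.Println("already present, skipping copy:", dst)
		return nil
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := copyIfMissing("preloaded.tar.lz4", "/tmp/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}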
	I0804 00:33:04.373097   21140 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0804 00:33:04.383582   21140 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0804 00:33:04.401245   21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:33:04.526806   21140 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 00:33:08.641884   21140 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.115040619s)
	I0804 00:33:08.642005   21140 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0804 00:33:08.659453   21140 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0804 00:33:08.659474   21140 cache_images.go:84] Images are preloaded, skipping loading
	I0804 00:33:08.659489   21140 kubeadm.go:934] updating node { 192.168.39.132 8443 v1.30.3 docker true true} ...
	I0804 00:33:08.659603   21140 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-230158 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-230158 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:33:08.659661   21140 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0804 00:33:08.716313   21140 cni.go:84] Creating CNI manager for ""
	I0804 00:33:08.716341   21140 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0804 00:33:08.716355   21140 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:33:08.716382   21140 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.132 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-230158 NodeName:ha-230158 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:33:08.716595   21140 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-230158"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
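The kubeadm config above is rendered from the options struct logged at kubeadm.go:181: node name, advertise address, CRI socket, CIDRs and extra args are all spliced into a YAML template. A toy text/template rendering of just the InitConfiguration head; the template text and field names here are illustrative, not minikube's real template:

package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	opts := struct {
		NodeName, NodeIP, CRISocket string
		APIServerPort               int
	}{"ha-230158", "192.168.39.132", "unix:///var/run/cri-dockerd.sock", 8443}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, opts)
}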
	
	I0804 00:33:08.716626   21140 kube-vip.go:115] generating kube-vip config ...
	I0804 00:33:08.716676   21140 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0804 00:33:08.732205   21140 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0804 00:33:08.732311   21140 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
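One detail worth flagging in the kube-vip config above: lb_enable: "true" was not hard-coded. The modprobe --all ip_vs ... probe at 00:33:08.716676 succeeded, so control-plane load-balancing was auto-enabled (kube-vip's LB mode rides on IPVS in the guest kernel). A sketch of that gate:

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable reports whether the IPVS modules load, mirroring the
// modprobe probe in the log; failure just means lb_enable stays off.
func ipvsAvailable() bool {
	cmd := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack")
	return cmd.Run() == nil
}

func main() {
	lbEnable := "false"
	if ipvsAvailable() {
		lbEnable = "true" // auto-enable control-plane load-balancing
	}
	fmt.Println("lb_enable:", lbEnable)
}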
	I0804 00:33:08.732368   21140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:33:08.741857   21140 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:33:08.741916   21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0804 00:33:08.751330   21140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0804 00:33:08.767807   21140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:33:08.784009   21140 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0804 00:33:08.800055   21140 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0804 00:33:08.816343   21140 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0804 00:33:08.820140   21140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:33:08.831454   21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:33:08.935642   21140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:33:08.952890   21140 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158 for IP: 192.168.39.132
	I0804 00:33:08.952912   21140 certs.go:194] generating shared ca certs ...
	I0804 00:33:08.952930   21140 certs.go:226] acquiring lock for ca certs: {Name:mkffa482a260ec35b4e7e61a9f84c11349615c10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:33:08.953076   21140 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.key
	I0804 00:33:08.953143   21140 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.key
	I0804 00:33:08.953157   21140 certs.go:256] generating profile certs ...
	I0804 00:33:08.953237   21140 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.key
	I0804 00:33:08.953254   21140 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.crt with IP's: []
	I0804 00:33:09.154018   21140 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.crt ...
	I0804 00:33:09.154047   21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.crt: {Name:mk77c87b09a42f8e8aee2ee64e4eb37962023013 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:33:09.154262   21140 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.key ...
	I0804 00:33:09.154278   21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.key: {Name:mkd98ec90d89c2dbad3b99fe7050b3894fffdeed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:33:09.154387   21140 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.5e009b09
	I0804 00:33:09.154406   21140 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.5e009b09 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.132 192.168.39.254]
	I0804 00:33:09.252772   21140 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.5e009b09 ...
	I0804 00:33:09.252800   21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.5e009b09: {Name:mk8b5b74784bb5e469752a6b2aa491801d503e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:33:09.252969   21140 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.5e009b09 ...
	I0804 00:33:09.252986   21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.5e009b09: {Name:mk7bee344db6d519ff8e4e621b3b58f319578c73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:33:09.253087   21140 certs.go:381] copying /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.5e009b09 -> /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt
	I0804 00:33:09.253190   21140 certs.go:385] copying /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.5e009b09 -> /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key
	I0804 00:33:09.253281   21140 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.key
	I0804 00:33:09.253300   21140 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.crt with IP's: []
	I0804 00:33:09.364891   21140 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.crt ...
	I0804 00:33:09.364920   21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.crt: {Name:mk3d390eec4d12ccf4bc093c347188787f985e6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:33:09.365094   21140 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.key ...
	I0804 00:33:09.365109   21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.key: {Name:mkf1993957b9d4c0bc8a39fbf94f6893985f9203 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:33:09.365208   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0804 00:33:09.365232   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0804 00:33:09.365252   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0804 00:33:09.365269   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0804 00:33:09.365289   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0804 00:33:09.365308   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0804 00:33:09.365326   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0804 00:33:09.365344   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0804 00:33:09.365404   21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136.pem (1338 bytes)
	W0804 00:33:09.365449   21140 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136_empty.pem, impossibly tiny 0 bytes
	I0804 00:33:09.365462   21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 00:33:09.365495   21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:33:09.365526   21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:33:09.365559   21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem (1679 bytes)
	I0804 00:33:09.365628   21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem (1708 bytes)
	I0804 00:33:09.365673   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> /usr/share/ca-certificates/111362.pem
	I0804 00:33:09.365694   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:33:09.365712   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136.pem -> /usr/share/ca-certificates/11136.pem
	I0804 00:33:09.366273   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:33:09.391653   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:33:09.414747   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:33:09.437640   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 00:33:09.460325   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 00:33:09.483242   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0804 00:33:09.505490   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:33:09.527863   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 00:33:09.550718   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem --> /usr/share/ca-certificates/111362.pem (1708 bytes)
	I0804 00:33:09.573234   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:33:09.595976   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136.pem --> /usr/share/ca-certificates/11136.pem (1338 bytes)
	I0804 00:33:09.618743   21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:33:09.635017   21140 ssh_runner.go:195] Run: openssl version
	I0804 00:33:09.640910   21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111362.pem && ln -fs /usr/share/ca-certificates/111362.pem /etc/ssl/certs/111362.pem"
	I0804 00:33:09.651593   21140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111362.pem
	I0804 00:33:09.656397   21140 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 00:28 /usr/share/ca-certificates/111362.pem
	I0804 00:33:09.656446   21140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111362.pem
	I0804 00:33:09.662327   21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111362.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:33:09.673051   21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:33:09.683683   21140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:33:09.688002   21140 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 00:21 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:33:09.688043   21140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:33:09.693530   21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:33:09.707715   21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11136.pem && ln -fs /usr/share/ca-certificates/11136.pem /etc/ssl/certs/11136.pem"
	I0804 00:33:09.718660   21140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11136.pem
	I0804 00:33:09.723160   21140 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 00:28 /usr/share/ca-certificates/11136.pem
	I0804 00:33:09.723214   21140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11136.pem
	I0804 00:33:09.729048   21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11136.pem /etc/ssl/certs/51391683.0"
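The three blocks above each install a CA into the guest trust store the same way: place the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash with openssl x509 -hash -noout, and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL's hashed directory lookup resolves the CA. A sketch that shells out for the hash, as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert symlinks /etc/ssl/certs/<subject-hash>.0 to pemPath so that
// OpenSSL's hash-based directory lookup can find the certificate.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}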
	I0804 00:33:09.742606   21140 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:33:09.746906   21140 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0804 00:33:09.746953   21140 kubeadm.go:392] StartCluster: {Name:ha-230158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-230158 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:33:09.747116   21140 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0804 00:33:09.775287   21140 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:33:09.787393   21140 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:33:09.797273   21140 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:33:09.807040   21140 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:33:09.807058   21140 kubeadm.go:157] found existing configuration files:
	
	I0804 00:33:09.807101   21140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:33:09.816230   21140 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:33:09.816272   21140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:33:09.825781   21140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:33:09.834834   21140 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:33:09.834872   21140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:33:09.844017   21140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:33:09.852963   21140 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:33:09.852996   21140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:33:09.862317   21140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:33:09.871282   21140 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:33:09.871318   21140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
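
The four grep/rm pairs above apply one rule per kubeconfig under /etc/kubernetes: keep the file only if it already points at https://control-plane.minikube.internal:8443, otherwise treat it as stale and delete it before kubeadm init runs. The sequence is roughly equivalent to (file names and URL from the log):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
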
	I0804 00:33:09.880621   21140 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:33:10.099955   21140 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:33:21.015275   21140 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0804 00:33:21.015361   21140 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:33:21.015466   21140 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:33:21.015598   21140 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:33:21.015702   21140 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:33:21.015791   21140 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:33:21.017397   21140 out.go:204]   - Generating certificates and keys ...
	I0804 00:33:21.017476   21140 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:33:21.017534   21140 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:33:21.017642   21140 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0804 00:33:21.017733   21140 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0804 00:33:21.017817   21140 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0804 00:33:21.017887   21140 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0804 00:33:21.017965   21140 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0804 00:33:21.018249   21140 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-230158 localhost] and IPs [192.168.39.132 127.0.0.1 ::1]
	I0804 00:33:21.018339   21140 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0804 00:33:21.018518   21140 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-230158 localhost] and IPs [192.168.39.132 127.0.0.1 ::1]
	I0804 00:33:21.018618   21140 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0804 00:33:21.018708   21140 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0804 00:33:21.018774   21140 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0804 00:33:21.019023   21140 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:33:21.019105   21140 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:33:21.019170   21140 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 00:33:21.019243   21140 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:33:21.019335   21140 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:33:21.019418   21140 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:33:21.019546   21140 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:33:21.019651   21140 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:33:21.020981   21140 out.go:204]   - Booting up control plane ...
	I0804 00:33:21.021058   21140 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:33:21.021135   21140 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:33:21.021222   21140 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:33:21.021344   21140 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:33:21.021481   21140 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:33:21.021526   21140 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:33:21.021709   21140 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 00:33:21.021782   21140 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0804 00:33:21.021861   21140 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 512.629998ms
	I0804 00:33:21.021954   21140 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0804 00:33:21.022015   21140 kubeadm.go:310] [api-check] The API server is healthy after 6.128513733s
	I0804 00:33:21.022108   21140 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0804 00:33:21.022225   21140 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0804 00:33:21.022305   21140 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0804 00:33:21.022485   21140 kubeadm.go:310] [mark-control-plane] Marking the node ha-230158 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0804 00:33:21.022550   21140 kubeadm.go:310] [bootstrap-token] Using token: xdcwsg.p04udedd0rn0a6qg
	I0804 00:33:21.023876   21140 out.go:204]   - Configuring RBAC rules ...
	I0804 00:33:21.023967   21140 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0804 00:33:21.024071   21140 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0804 00:33:21.024234   21140 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0804 00:33:21.024358   21140 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0804 00:33:21.024461   21140 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0804 00:33:21.024544   21140 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0804 00:33:21.024655   21140 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0804 00:33:21.024695   21140 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0804 00:33:21.024774   21140 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0804 00:33:21.024784   21140 kubeadm.go:310] 
	I0804 00:33:21.024832   21140 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0804 00:33:21.024838   21140 kubeadm.go:310] 
	I0804 00:33:21.024916   21140 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0804 00:33:21.024930   21140 kubeadm.go:310] 
	I0804 00:33:21.024972   21140 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0804 00:33:21.025023   21140 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0804 00:33:21.025076   21140 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0804 00:33:21.025091   21140 kubeadm.go:310] 
	I0804 00:33:21.025147   21140 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0804 00:33:21.025155   21140 kubeadm.go:310] 
	I0804 00:33:21.025215   21140 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0804 00:33:21.025222   21140 kubeadm.go:310] 
	I0804 00:33:21.025296   21140 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0804 00:33:21.025361   21140 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0804 00:33:21.025417   21140 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0804 00:33:21.025423   21140 kubeadm.go:310] 
	I0804 00:33:21.025497   21140 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0804 00:33:21.025568   21140 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0804 00:33:21.025579   21140 kubeadm.go:310] 
	I0804 00:33:21.025654   21140 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xdcwsg.p04udedd0rn0a6qg \
	I0804 00:33:21.025762   21140 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:df45234da77be7664dbc18ef5748e1cb4d47aa47bd0026b7b7a8eef37767d0f0 \
	I0804 00:33:21.025785   21140 kubeadm.go:310] 	--control-plane 
	I0804 00:33:21.025792   21140 kubeadm.go:310] 
	I0804 00:33:21.025868   21140 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0804 00:33:21.025875   21140 kubeadm.go:310] 
	I0804 00:33:21.025940   21140 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xdcwsg.p04udedd0rn0a6qg \
	I0804 00:33:21.026039   21140 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:df45234da77be7664dbc18ef5748e1cb4d47aa47bd0026b7b7a8eef37767d0f0 
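
The --discovery-token-ca-cert-hash in the join commands above is a hash of the cluster CA's public key, not of the whole certificate. Assuming the CA sits in the certificateDir reported earlier ("/var/lib/minikube/certs"), the value can be recomputed with the standard kubeadm recipe:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'

which should reproduce the df45234d... digest printed in both join commands.
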
	I0804 00:33:21.026049   21140 cni.go:84] Creating CNI manager for ""
	I0804 00:33:21.026055   21140 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0804 00:33:21.027626   21140 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0804 00:33:21.028874   21140 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0804 00:33:21.034481   21140 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0804 00:33:21.034495   21140 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0804 00:33:21.054487   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
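
The stat of /opt/cni/bin/portmap just before this looks like a capability probe: with the standard CNI plugin binaries present in the guest image, minikube renders its kindnet manifest (2438 bytes per the scp line) and applies it through the freshly written kubeconfig. A by-hand equivalent, under that assumption and with paths from the log:

    stat /opt/cni/bin/portmap >/dev/null 2>&1 && \
      sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply \
        --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
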
	I0804 00:33:21.377297   21140 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:33:21.377369   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:21.377421   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-230158 minikube.k8s.io/updated_at=2024_08_04T00_33_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6 minikube.k8s.io/name=ha-230158 minikube.k8s.io/primary=true
	I0804 00:33:21.392409   21140 ops.go:34] apiserver oom_adj: -16
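
An oom_adj of -16 means the kernel's OOM killer will prefer almost any other process over kube-apiserver when memory runs short; the log verifies this by reading procfs. Assuming a shell on the node, the same check (plus the non-deprecated interface) is:

    pid=$(pgrep kube-apiserver)
    cat "/proc/$pid/oom_adj"        # legacy knob, the one read in the log
    cat "/proc/$pid/oom_score_adj"  # current kernel interface
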
	I0804 00:33:21.500779   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:22.001370   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:22.501148   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:23.001450   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:23.500923   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:24.001430   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:24.500937   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:25.001823   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:25.500942   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:26.001281   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:26.501419   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:27.001535   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:27.500922   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:28.001367   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:28.501606   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:29.001608   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:29.501097   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:30.001320   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:30.501777   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:31.001829   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:31.500852   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:32.000906   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:32.501555   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:33.001108   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:33.500822   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:33:33.599996   21140 kubeadm.go:1113] duration metric: took 12.222679799s to wait for elevateKubeSystemPrivileges
	I0804 00:33:33.600032   21140 kubeadm.go:394] duration metric: took 23.853080946s to StartCluster
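
The run of identical get sa default calls from 00:33:21.5 to 00:33:33.5 is a fixed-interval readiness poll: minikube re-issues the command roughly every 500ms until the default ServiceAccount exists, which is what the 12.2s elevateKubeSystemPrivileges metric above measures. The loop reduces to approximately (binary and kubeconfig paths from the log):

    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
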
	I0804 00:33:33.600052   21140 settings.go:142] acquiring lock: {Name:mk93b1d9065d26901985574a9ad74d7ec3be884d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:33:33.600124   21140 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-3947/kubeconfig
	I0804 00:33:33.601002   21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/kubeconfig: {Name:mk8868e58184f812ddd7933d7e896763e01aff49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:33:33.601248   21140 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 00:33:33.601268   21140 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0804 00:33:33.601277   21140 start.go:241] waiting for startup goroutines ...
	I0804 00:33:33.601311   21140 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:33:33.601370   21140 addons.go:69] Setting storage-provisioner=true in profile "ha-230158"
	I0804 00:33:33.601386   21140 addons.go:69] Setting default-storageclass=true in profile "ha-230158"
	I0804 00:33:33.601403   21140 addons.go:234] Setting addon storage-provisioner=true in "ha-230158"
	I0804 00:33:33.601423   21140 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-230158"
	I0804 00:33:33.601446   21140 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:33:33.601526   21140 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:33:33.601761   21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:33:33.601797   21140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:33:33.601853   21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:33:33.601892   21140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:33:33.617179   21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40405
	I0804 00:33:33.617179   21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37073
	I0804 00:33:33.617665   21140 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:33:33.617812   21140 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:33:33.618191   21140 main.go:141] libmachine: Using API Version  1
	I0804 00:33:33.618209   21140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:33:33.618351   21140 main.go:141] libmachine: Using API Version  1
	I0804 00:33:33.618372   21140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:33:33.618612   21140 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:33:33.618671   21140 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:33:33.618836   21140 main.go:141] libmachine: (ha-230158) Calling .GetState
	I0804 00:33:33.619191   21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:33:33.619243   21140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:33:33.621018   21140 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19364-3947/kubeconfig
	I0804 00:33:33.621359   21140 kapi.go:59] client config for ha-230158: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.crt", KeyFile:"/home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.key", CAFile:"/home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0804 00:33:33.621903   21140 cert_rotation.go:137] Starting client certificate rotation controller
	I0804 00:33:33.622134   21140 addons.go:234] Setting addon default-storageclass=true in "ha-230158"
	I0804 00:33:33.622173   21140 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:33:33.622560   21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:33:33.622603   21140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:33:33.634444   21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37337
	I0804 00:33:33.634887   21140 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:33:33.635446   21140 main.go:141] libmachine: Using API Version  1
	I0804 00:33:33.635474   21140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:33:33.635799   21140 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:33:33.635966   21140 main.go:141] libmachine: (ha-230158) Calling .GetState
	I0804 00:33:33.637531   21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:33:33.637685   21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33185
	I0804 00:33:33.638061   21140 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:33:33.638645   21140 main.go:141] libmachine: Using API Version  1
	I0804 00:33:33.638670   21140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:33:33.639061   21140 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:33:33.639216   21140 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:33:33.639579   21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:33:33.639624   21140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:33:33.640364   21140 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:33:33.640378   21140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:33:33.640390   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:33:33.642791   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:33:33.643109   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:33:33.643135   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:33:33.643250   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:33:33.643417   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:33:33.643550   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:33:33.643694   21140 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
	I0804 00:33:33.658077   21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36749
	I0804 00:33:33.658469   21140 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:33:33.659008   21140 main.go:141] libmachine: Using API Version  1
	I0804 00:33:33.659033   21140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:33:33.659357   21140 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:33:33.659586   21140 main.go:141] libmachine: (ha-230158) Calling .GetState
	I0804 00:33:33.661196   21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:33:33.661413   21140 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:33:33.661429   21140 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:33:33.661445   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:33:33.664060   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:33:33.664559   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:33:33.664584   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:33:33.664668   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:33:33.664853   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:33:33.664989   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:33:33.665122   21140 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
	I0804 00:33:33.775943   21140 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0804 00:33:33.788753   21140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:33:33.844831   21140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:33:34.311857   21140 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
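
The sed pipeline at 00:33:33.775943 rewrites the coredns ConfigMap in place: it splices a hosts block ahead of the forward directive so that host.minikube.internal resolves to the host gateway 192.168.39.1, and adds a log directive before errors. Re-indented, the injected stanza reads:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }
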
	I0804 00:33:34.363949   21140 main.go:141] libmachine: Making call to close driver server
	I0804 00:33:34.363978   21140 main.go:141] libmachine: Making call to close driver server
	I0804 00:33:34.363987   21140 main.go:141] libmachine: (ha-230158) Calling .Close
	I0804 00:33:34.363992   21140 main.go:141] libmachine: (ha-230158) Calling .Close
	I0804 00:33:34.364299   21140 main.go:141] libmachine: (ha-230158) DBG | Closing plugin on server side
	I0804 00:33:34.364322   21140 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:33:34.364327   21140 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:33:34.364332   21140 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:33:34.364336   21140 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:33:34.364345   21140 main.go:141] libmachine: Making call to close driver server
	I0804 00:33:34.364349   21140 main.go:141] libmachine: Making call to close driver server
	I0804 00:33:34.364353   21140 main.go:141] libmachine: (ha-230158) Calling .Close
	I0804 00:33:34.364368   21140 main.go:141] libmachine: (ha-230158) Calling .Close
	I0804 00:33:34.364590   21140 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:33:34.364605   21140 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:33:34.364643   21140 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:33:34.364654   21140 main.go:141] libmachine: (ha-230158) DBG | Closing plugin on server side
	I0804 00:33:34.364663   21140 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:33:34.364795   21140 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0804 00:33:34.364810   21140 round_trippers.go:469] Request Headers:
	I0804 00:33:34.364822   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:33:34.364832   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:33:34.374805   21140 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0804 00:33:34.375382   21140 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0804 00:33:34.375397   21140 round_trippers.go:469] Request Headers:
	I0804 00:33:34.375407   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:33:34.375413   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:33:34.375417   21140 round_trippers.go:473]     Content-Type: application/json
	I0804 00:33:34.377733   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:33:34.377876   21140 main.go:141] libmachine: Making call to close driver server
	I0804 00:33:34.377889   21140 main.go:141] libmachine: (ha-230158) Calling .Close
	I0804 00:33:34.378104   21140 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:33:34.378122   21140 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:33:34.379597   21140 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0804 00:33:34.380748   21140 addons.go:510] duration metric: took 779.456208ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0804 00:33:34.380772   21140 start.go:246] waiting for cluster config update ...
	I0804 00:33:34.380781   21140 start.go:255] writing updated cluster config ...
	I0804 00:33:34.382225   21140 out.go:177] 
	I0804 00:33:34.383357   21140 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:33:34.383439   21140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
	I0804 00:33:34.384860   21140 out.go:177] * Starting "ha-230158-m02" control-plane node in "ha-230158" cluster
	I0804 00:33:34.385928   21140 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0804 00:33:34.385946   21140 cache.go:56] Caching tarball of preloaded images
	I0804 00:33:34.386031   21140 preload.go:172] Found /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0804 00:33:34.386044   21140 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0804 00:33:34.386118   21140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
	I0804 00:33:34.386311   21140 start.go:360] acquireMachinesLock for ha-230158-m02: {Name:mk3c8b650475b5a29be5f1e49e0345d4de7c1632 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:33:34.386363   21140 start.go:364] duration metric: took 32.811µs to acquireMachinesLock for "ha-230158-m02"
	I0804 00:33:34.386387   21140 start.go:93] Provisioning new machine with config: &{Name:ha-230158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-230158 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 00:33:34.386466   21140 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0804 00:33:34.387864   21140 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0804 00:33:34.387949   21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:33:34.387988   21140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:33:34.401959   21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34527
	I0804 00:33:34.402313   21140 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:33:34.402735   21140 main.go:141] libmachine: Using API Version  1
	I0804 00:33:34.402757   21140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:33:34.403072   21140 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:33:34.403260   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetMachineName
	I0804 00:33:34.403388   21140 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:33:34.403545   21140 start.go:159] libmachine.API.Create for "ha-230158" (driver="kvm2")
	I0804 00:33:34.403572   21140 client.go:168] LocalClient.Create starting
	I0804 00:33:34.403605   21140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem
	I0804 00:33:34.403644   21140 main.go:141] libmachine: Decoding PEM data...
	I0804 00:33:34.403667   21140 main.go:141] libmachine: Parsing certificate...
	I0804 00:33:34.403740   21140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem
	I0804 00:33:34.403766   21140 main.go:141] libmachine: Decoding PEM data...
	I0804 00:33:34.403784   21140 main.go:141] libmachine: Parsing certificate...
	I0804 00:33:34.403810   21140 main.go:141] libmachine: Running pre-create checks...
	I0804 00:33:34.403821   21140 main.go:141] libmachine: (ha-230158-m02) Calling .PreCreateCheck
	I0804 00:33:34.403961   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetConfigRaw
	I0804 00:33:34.405677   21140 main.go:141] libmachine: Creating machine...
	I0804 00:33:34.405699   21140 main.go:141] libmachine: (ha-230158-m02) Calling .Create
	I0804 00:33:34.405841   21140 main.go:141] libmachine: (ha-230158-m02) Creating KVM machine...
	I0804 00:33:34.407026   21140 main.go:141] libmachine: (ha-230158-m02) DBG | found existing default KVM network
	I0804 00:33:34.407168   21140 main.go:141] libmachine: (ha-230158-m02) DBG | found existing private KVM network mk-ha-230158
	I0804 00:33:34.407306   21140 main.go:141] libmachine: (ha-230158-m02) Setting up store path in /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02 ...
	I0804 00:33:34.407329   21140 main.go:141] libmachine: (ha-230158-m02) Building disk image from file:///home/jenkins/minikube-integration/19364-3947/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0804 00:33:34.407367   21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:34.407296   21577 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-3947/.minikube
	I0804 00:33:34.407443   21140 main.go:141] libmachine: (ha-230158-m02) Downloading /home/jenkins/minikube-integration/19364-3947/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-3947/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0804 00:33:34.631738   21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:34.631614   21577 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa...
	I0804 00:33:34.980781   21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:34.980506   21577 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/ha-230158-m02.rawdisk...
	I0804 00:33:34.980819   21140 main.go:141] libmachine: (ha-230158-m02) DBG | Writing magic tar header
	I0804 00:33:34.980837   21140 main.go:141] libmachine: (ha-230158-m02) DBG | Writing SSH key tar header
	I0804 00:33:34.981050   21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:34.980961   21577 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02 ...
	I0804 00:33:34.981304   21140 main.go:141] libmachine: (ha-230158-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02
	I0804 00:33:34.981324   21140 main.go:141] libmachine: (ha-230158-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947/.minikube/machines
	I0804 00:33:34.981335   21140 main.go:141] libmachine: (ha-230158-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02 (perms=drwx------)
	I0804 00:33:34.981346   21140 main.go:141] libmachine: (ha-230158-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947/.minikube
	I0804 00:33:34.981361   21140 main.go:141] libmachine: (ha-230158-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947
	I0804 00:33:34.981492   21140 main.go:141] libmachine: (ha-230158-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0804 00:33:34.981515   21140 main.go:141] libmachine: (ha-230158-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947/.minikube/machines (perms=drwxr-xr-x)
	I0804 00:33:34.981532   21140 main.go:141] libmachine: (ha-230158-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947/.minikube (perms=drwxr-xr-x)
	I0804 00:33:34.981547   21140 main.go:141] libmachine: (ha-230158-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947 (perms=drwxrwxr-x)
	I0804 00:33:34.981567   21140 main.go:141] libmachine: (ha-230158-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0804 00:33:34.981581   21140 main.go:141] libmachine: (ha-230158-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0804 00:33:34.981593   21140 main.go:141] libmachine: (ha-230158-m02) DBG | Checking permissions on dir: /home/jenkins
	I0804 00:33:34.981608   21140 main.go:141] libmachine: (ha-230158-m02) DBG | Checking permissions on dir: /home
	I0804 00:33:34.981620   21140 main.go:141] libmachine: (ha-230158-m02) DBG | Skipping /home - not owner
	I0804 00:33:34.981647   21140 main.go:141] libmachine: (ha-230158-m02) Creating domain...
	I0804 00:33:34.982529   21140 main.go:141] libmachine: (ha-230158-m02) define libvirt domain using xml: 
	I0804 00:33:34.982547   21140 main.go:141] libmachine: (ha-230158-m02) <domain type='kvm'>
	I0804 00:33:34.982578   21140 main.go:141] libmachine: (ha-230158-m02)   <name>ha-230158-m02</name>
	I0804 00:33:34.982598   21140 main.go:141] libmachine: (ha-230158-m02)   <memory unit='MiB'>2200</memory>
	I0804 00:33:34.982608   21140 main.go:141] libmachine: (ha-230158-m02)   <vcpu>2</vcpu>
	I0804 00:33:34.982615   21140 main.go:141] libmachine: (ha-230158-m02)   <features>
	I0804 00:33:34.982623   21140 main.go:141] libmachine: (ha-230158-m02)     <acpi/>
	I0804 00:33:34.982633   21140 main.go:141] libmachine: (ha-230158-m02)     <apic/>
	I0804 00:33:34.982639   21140 main.go:141] libmachine: (ha-230158-m02)     <pae/>
	I0804 00:33:34.982645   21140 main.go:141] libmachine: (ha-230158-m02)     
	I0804 00:33:34.982653   21140 main.go:141] libmachine: (ha-230158-m02)   </features>
	I0804 00:33:34.982661   21140 main.go:141] libmachine: (ha-230158-m02)   <cpu mode='host-passthrough'>
	I0804 00:33:34.982670   21140 main.go:141] libmachine: (ha-230158-m02)   
	I0804 00:33:34.982676   21140 main.go:141] libmachine: (ha-230158-m02)   </cpu>
	I0804 00:33:34.982686   21140 main.go:141] libmachine: (ha-230158-m02)   <os>
	I0804 00:33:34.982698   21140 main.go:141] libmachine: (ha-230158-m02)     <type>hvm</type>
	I0804 00:33:34.982710   21140 main.go:141] libmachine: (ha-230158-m02)     <boot dev='cdrom'/>
	I0804 00:33:34.982720   21140 main.go:141] libmachine: (ha-230158-m02)     <boot dev='hd'/>
	I0804 00:33:34.982737   21140 main.go:141] libmachine: (ha-230158-m02)     <bootmenu enable='no'/>
	I0804 00:33:34.982771   21140 main.go:141] libmachine: (ha-230158-m02)   </os>
	I0804 00:33:34.982784   21140 main.go:141] libmachine: (ha-230158-m02)   <devices>
	I0804 00:33:34.982795   21140 main.go:141] libmachine: (ha-230158-m02)     <disk type='file' device='cdrom'>
	I0804 00:33:34.982811   21140 main.go:141] libmachine: (ha-230158-m02)       <source file='/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/boot2docker.iso'/>
	I0804 00:33:34.982823   21140 main.go:141] libmachine: (ha-230158-m02)       <target dev='hdc' bus='scsi'/>
	I0804 00:33:34.982835   21140 main.go:141] libmachine: (ha-230158-m02)       <readonly/>
	I0804 00:33:34.982846   21140 main.go:141] libmachine: (ha-230158-m02)     </disk>
	I0804 00:33:34.982860   21140 main.go:141] libmachine: (ha-230158-m02)     <disk type='file' device='disk'>
	I0804 00:33:34.982872   21140 main.go:141] libmachine: (ha-230158-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0804 00:33:34.982888   21140 main.go:141] libmachine: (ha-230158-m02)       <source file='/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/ha-230158-m02.rawdisk'/>
	I0804 00:33:34.982898   21140 main.go:141] libmachine: (ha-230158-m02)       <target dev='hda' bus='virtio'/>
	I0804 00:33:34.982907   21140 main.go:141] libmachine: (ha-230158-m02)     </disk>
	I0804 00:33:34.982922   21140 main.go:141] libmachine: (ha-230158-m02)     <interface type='network'>
	I0804 00:33:34.982941   21140 main.go:141] libmachine: (ha-230158-m02)       <source network='mk-ha-230158'/>
	I0804 00:33:34.982953   21140 main.go:141] libmachine: (ha-230158-m02)       <model type='virtio'/>
	I0804 00:33:34.982965   21140 main.go:141] libmachine: (ha-230158-m02)     </interface>
	I0804 00:33:34.982974   21140 main.go:141] libmachine: (ha-230158-m02)     <interface type='network'>
	I0804 00:33:34.982983   21140 main.go:141] libmachine: (ha-230158-m02)       <source network='default'/>
	I0804 00:33:34.982997   21140 main.go:141] libmachine: (ha-230158-m02)       <model type='virtio'/>
	I0804 00:33:34.983008   21140 main.go:141] libmachine: (ha-230158-m02)     </interface>
	I0804 00:33:34.983018   21140 main.go:141] libmachine: (ha-230158-m02)     <serial type='pty'>
	I0804 00:33:34.983027   21140 main.go:141] libmachine: (ha-230158-m02)       <target port='0'/>
	I0804 00:33:34.983038   21140 main.go:141] libmachine: (ha-230158-m02)     </serial>
	I0804 00:33:34.983046   21140 main.go:141] libmachine: (ha-230158-m02)     <console type='pty'>
	I0804 00:33:34.983057   21140 main.go:141] libmachine: (ha-230158-m02)       <target type='serial' port='0'/>
	I0804 00:33:34.983076   21140 main.go:141] libmachine: (ha-230158-m02)     </console>
	I0804 00:33:34.983091   21140 main.go:141] libmachine: (ha-230158-m02)     <rng model='virtio'>
	I0804 00:33:34.983105   21140 main.go:141] libmachine: (ha-230158-m02)       <backend model='random'>/dev/random</backend>
	I0804 00:33:34.983113   21140 main.go:141] libmachine: (ha-230158-m02)     </rng>
	I0804 00:33:34.983120   21140 main.go:141] libmachine: (ha-230158-m02)     
	I0804 00:33:34.983129   21140 main.go:141] libmachine: (ha-230158-m02)     
	I0804 00:33:34.983138   21140 main.go:141] libmachine: (ha-230158-m02)   </devices>
	I0804 00:33:34.983148   21140 main.go:141] libmachine: (ha-230158-m02) </domain>
	I0804 00:33:34.983158   21140 main.go:141] libmachine: (ha-230158-m02) 
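
The XML printed above is the libvirt domain minikube defines for the new machine: 2 vCPUs and 2200 MiB matching the cluster config, a SCSI CD-ROM carrying boot2docker.iso, a raw virtio system disk, a virtio RNG, and two virtio NICs, one on the private mk-ha-230158 network and one on libvirt's default network. Once defined, the same definition can be inspected out-of-band (connection URI from KVMQemuURI in the config dump):

    virsh --connect qemu:///system dumpxml ha-230158-m02
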
	I0804 00:33:34.989079   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:cb:b3:b0 in network default
	I0804 00:33:34.989578   21140 main.go:141] libmachine: (ha-230158-m02) Ensuring networks are active...
	I0804 00:33:34.989599   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:33:34.990268   21140 main.go:141] libmachine: (ha-230158-m02) Ensuring network default is active
	I0804 00:33:34.990644   21140 main.go:141] libmachine: (ha-230158-m02) Ensuring network mk-ha-230158 is active
	I0804 00:33:34.991147   21140 main.go:141] libmachine: (ha-230158-m02) Getting domain xml...
	I0804 00:33:34.991882   21140 main.go:141] libmachine: (ha-230158-m02) Creating domain...
	I0804 00:33:36.236143   21140 main.go:141] libmachine: (ha-230158-m02) Waiting to get IP...
	I0804 00:33:36.236924   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:33:36.237320   21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
	I0804 00:33:36.237365   21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:36.237316   21577 retry.go:31] will retry after 269.343087ms: waiting for machine to come up
	I0804 00:33:36.508842   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:33:36.509404   21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
	I0804 00:33:36.509434   21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:36.509353   21577 retry.go:31] will retry after 320.354ms: waiting for machine to come up
	I0804 00:33:36.830933   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:33:36.831384   21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
	I0804 00:33:36.831405   21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:36.831339   21577 retry.go:31] will retry after 388.826244ms: waiting for machine to come up
	I0804 00:33:37.221810   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:33:37.222296   21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
	I0804 00:33:37.222324   21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:37.222246   21577 retry.go:31] will retry after 438.566018ms: waiting for machine to come up
	I0804 00:33:37.662559   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:33:37.662923   21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
	I0804 00:33:37.662950   21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:37.662888   21577 retry.go:31] will retry after 720.487951ms: waiting for machine to come up
	I0804 00:33:38.384849   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:33:38.385274   21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
	I0804 00:33:38.385296   21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:38.385220   21577 retry.go:31] will retry after 780.198189ms: waiting for machine to come up
	I0804 00:33:39.166800   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:33:39.167189   21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
	I0804 00:33:39.167217   21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:39.167150   21577 retry.go:31] will retry after 1.085150437s: waiting for machine to come up
	I0804 00:33:40.253366   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:33:40.253781   21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
	I0804 00:33:40.253804   21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:40.253737   21577 retry.go:31] will retry after 1.077284779s: waiting for machine to come up
	I0804 00:33:41.332446   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:33:41.332911   21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
	I0804 00:33:41.332940   21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:41.332880   21577 retry.go:31] will retry after 1.445435502s: waiting for machine to come up
	I0804 00:33:42.780433   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:33:42.780972   21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
	I0804 00:33:42.780996   21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:42.780922   21577 retry.go:31] will retry after 2.049802174s: waiting for machine to come up
	I0804 00:33:44.832833   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:33:44.833350   21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
	I0804 00:33:44.833376   21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:44.833310   21577 retry.go:31] will retry after 2.47727833s: waiting for machine to come up
	I0804 00:33:47.313965   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:33:47.314559   21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
	I0804 00:33:47.314586   21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:47.314517   21577 retry.go:31] will retry after 2.252609164s: waiting for machine to come up
	I0804 00:33:49.568155   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:33:49.568430   21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
	I0804 00:33:49.568451   21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:49.568403   21577 retry.go:31] will retry after 3.504934561s: waiting for machine to come up
	I0804 00:33:53.075350   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:33:53.075829   21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
	I0804 00:33:53.075873   21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:53.075769   21577 retry.go:31] will retry after 3.894784936s: waiting for machine to come up
	I0804 00:33:56.974127   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:33:56.974614   21140 main.go:141] libmachine: (ha-230158-m02) Found IP for machine: 192.168.39.188
	I0804 00:33:56.974638   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has current primary IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:33:56.974645   21140 main.go:141] libmachine: (ha-230158-m02) Reserving static IP address...
	I0804 00:33:56.975086   21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find host DHCP lease matching {name: "ha-230158-m02", mac: "52:54:00:18:6b:a7", ip: "192.168.39.188"} in network mk-ha-230158
	I0804 00:33:57.044305   21140 main.go:141] libmachine: (ha-230158-m02) DBG | Getting to WaitForSSH function...
	I0804 00:33:57.044336   21140 main.go:141] libmachine: (ha-230158-m02) Reserved static IP address: 192.168.39.188
	I0804 00:33:57.044373   21140 main.go:141] libmachine: (ha-230158-m02) Waiting for SSH to be available...
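
The retry.go:31 lines above show the pattern libmachine uses while the VM boots: poll libvirt for a DHCP lease matching the domain's MAC address, and on each miss sleep a little longer (269ms up to roughly 3.9s here) before trying again, until the IP appears about 30 seconds in. Below is a minimal Go sketch of that wait-with-growing-backoff loop; lookupLeaseIP is a hypothetical stand-in for the libvirt lease query, and the growth factor and cap are assumptions, not minikube's actual constants. The randomized component mirrors the uneven delays visible in the log.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP is a hypothetical stand-in for querying the libvirt
// network for a DHCP lease matching the domain's MAC address.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet") // placeholder
}

// waitForIP polls until a lease appears or the deadline passes,
// sleeping a bit longer after each miss, mirroring the
// "will retry after ..." lines above.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	start := time.Now()
	wait := 250 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if wait < 4*time.Second {
			wait = wait * 3 / 2 // grow the base delay, capped near 4s
		}
	}
	return "", fmt.Errorf("machine %s did not get an IP within %v", mac, deadline)
}

func main() {
	ip, err := waitForIP("52:54:00:18:6b:a7", 2*time.Second)
	fmt.Println(ip, err)
}
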
	I0804 00:33:57.046724   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:33:57.047034   21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158
	I0804 00:33:57.047057   21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find defined IP address of network mk-ha-230158 interface with MAC address 52:54:00:18:6b:a7
	I0804 00:33:57.047178   21140 main.go:141] libmachine: (ha-230158-m02) DBG | Using SSH client type: external
	I0804 00:33:57.047200   21140 main.go:141] libmachine: (ha-230158-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa (-rw-------)
	I0804 00:33:57.047262   21140 main.go:141] libmachine: (ha-230158-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:33:57.047282   21140 main.go:141] libmachine: (ha-230158-m02) DBG | About to run SSH command:
	I0804 00:33:57.047299   21140 main.go:141] libmachine: (ha-230158-m02) DBG | exit 0
	I0804 00:33:57.050685   21140 main.go:141] libmachine: (ha-230158-m02) DBG | SSH cmd err, output: exit status 255: 
	I0804 00:33:57.050701   21140 main.go:141] libmachine: (ha-230158-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0804 00:33:57.050709   21140 main.go:141] libmachine: (ha-230158-m02) DBG | command : exit 0
	I0804 00:33:57.050718   21140 main.go:141] libmachine: (ha-230158-m02) DBG | err     : exit status 255
	I0804 00:33:57.050726   21140 main.go:141] libmachine: (ha-230158-m02) DBG | output  : 
	I0804 00:34:00.050918   21140 main.go:141] libmachine: (ha-230158-m02) DBG | Getting to WaitForSSH function...
	I0804 00:34:00.053466   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:00.053948   21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:34:00.053978   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:00.054097   21140 main.go:141] libmachine: (ha-230158-m02) DBG | Using SSH client type: external
	I0804 00:34:00.054121   21140 main.go:141] libmachine: (ha-230158-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa (-rw-------)
	I0804 00:34:00.054149   21140 main.go:141] libmachine: (ha-230158-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.188 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:34:00.054166   21140 main.go:141] libmachine: (ha-230158-m02) DBG | About to run SSH command:
	I0804 00:34:00.054181   21140 main.go:141] libmachine: (ha-230158-m02) DBG | exit 0
	I0804 00:34:00.182691   21140 main.go:141] libmachine: (ha-230158-m02) DBG | SSH cmd err, output: <nil>: 
	I0804 00:34:00.182940   21140 main.go:141] libmachine: (ha-230158-m02) KVM machine creation complete!
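
WaitForSSH above shells out to the system ssh binary and runs 'exit 0' until it succeeds: the 00:33:57 attempt fails with exit status 255 (sshd not yet up; note the empty docker@ destination before the lease appeared), and the 00:34:00 retry returns cleanly. A rough Go sketch of that readiness probe with os/exec, reusing the non-interactive options from the logged command line; the key path and the 3-second retry interval are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `ssh ... user@host exit 0` and reports whether it
// succeeded, mirroring libmachine's external-client WaitForSSH.
func sshReady(host, user, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		"exit", "0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run() == nil
}

func main() {
	for !sshReady("192.168.39.188", "docker", "/path/to/id_rsa") {
		time.Sleep(3 * time.Second) // the log shows ~3s between attempts
	}
	fmt.Println("SSH is available")
}
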
	I0804 00:34:00.183237   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetConfigRaw
	I0804 00:34:00.183772   21140 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:34:00.183934   21140 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:34:00.184119   21140 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0804 00:34:00.184135   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
	I0804 00:34:00.185402   21140 main.go:141] libmachine: Detecting operating system of created instance...
	I0804 00:34:00.185417   21140 main.go:141] libmachine: Waiting for SSH to be available...
	I0804 00:34:00.185422   21140 main.go:141] libmachine: Getting to WaitForSSH function...
	I0804 00:34:00.185427   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:34:00.187754   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:00.188163   21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:34:00.188187   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:00.188355   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:34:00.188540   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:34:00.188694   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:34:00.188851   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:34:00.189011   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:34:00.189258   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0804 00:34:00.189270   21140 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0804 00:34:00.297314   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:34:00.297337   21140 main.go:141] libmachine: Detecting the provisioner...
	I0804 00:34:00.297347   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:34:00.300140   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:00.300503   21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:34:00.300548   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:00.300706   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:34:00.300893   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:34:00.301033   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:34:00.301147   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:34:00.301331   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:34:00.301509   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0804 00:34:00.301522   21140 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0804 00:34:00.410856   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0804 00:34:00.410918   21140 main.go:141] libmachine: found compatible host: buildroot
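
Provisioner detection is just 'cat /etc/os-release' over SSH, matched against known distributions (Buildroot here). A small Go sketch of parsing that key=value output into a map, assuming the raw text has already been captured:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release style output into a map,
// trimming optional quotes around values.
func parseOSRelease(out string) map[string]string {
	info := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		info[k] = strings.Trim(v, `"`)
	}
	return info
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(out)
	fmt.Println(info["ID"], info["VERSION_ID"]) // buildroot 2023.02.9
}
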
	I0804 00:34:00.410928   21140 main.go:141] libmachine: Provisioning with buildroot...
	I0804 00:34:00.410938   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetMachineName
	I0804 00:34:00.411200   21140 buildroot.go:166] provisioning hostname "ha-230158-m02"
	I0804 00:34:00.411222   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetMachineName
	I0804 00:34:00.411396   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:34:00.413932   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:00.414334   21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:34:00.414361   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:00.414483   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:34:00.414639   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:34:00.414750   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:34:00.414866   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:34:00.415013   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:34:00.415182   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0804 00:34:00.415200   21140 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-230158-m02 && echo "ha-230158-m02" | sudo tee /etc/hostname
	I0804 00:34:00.540909   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-230158-m02
	
	I0804 00:34:00.540938   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:34:00.543874   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:00.544239   21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:34:00.544284   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:00.544450   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:34:00.544648   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:34:00.544834   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:34:00.544976   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:34:00.545131   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:34:00.545314   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0804 00:34:00.545335   21140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-230158-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-230158-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-230158-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:34:00.667251   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
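
The /etc/hosts script above is deliberately idempotent: it does nothing if some line already ends in the hostname, rewrites an existing 127.0.1.1 entry in place, and only appends otherwise. The same decision tree in Go, operating on file contents directly rather than through sed/tee over SSH (a sketch, not minikube's code):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry returns hosts content guaranteed to map 127.0.1.1
// to name: leave the file alone if the name is already present,
// rewrite an existing 127.0.1.1 line, or append a new one.
func ensureHostsEntry(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		t := strings.TrimSpace(l)
		if strings.HasSuffix(t, " "+name) || strings.HasSuffix(t, "\t"+name) {
			return hosts // already present
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "ha-230158-m02"))
}
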
	I0804 00:34:00.667277   21140 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-3947/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-3947/.minikube}
	I0804 00:34:00.667302   21140 buildroot.go:174] setting up certificates
	I0804 00:34:00.667311   21140 provision.go:84] configureAuth start
	I0804 00:34:00.667320   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetMachineName
	I0804 00:34:00.667577   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
	I0804 00:34:00.669910   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:00.670300   21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:34:00.670323   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:00.670468   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:34:00.672709   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:00.673007   21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:34:00.673036   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:00.673138   21140 provision.go:143] copyHostCerts
	I0804 00:34:00.673166   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem
	I0804 00:34:00.673200   21140 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem, removing ...
	I0804 00:34:00.673211   21140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem
	I0804 00:34:00.673280   21140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem (1082 bytes)
	I0804 00:34:00.673350   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem
	I0804 00:34:00.673368   21140 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem, removing ...
	I0804 00:34:00.673372   21140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem
	I0804 00:34:00.673397   21140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem (1123 bytes)
	I0804 00:34:00.673438   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem
	I0804 00:34:00.673454   21140 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem, removing ...
	I0804 00:34:00.673458   21140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem
	I0804 00:34:00.673478   21140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem (1679 bytes)
	I0804 00:34:00.673525   21140 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem org=jenkins.ha-230158-m02 san=[127.0.0.1 192.168.39.188 ha-230158-m02 localhost minikube]
	I0804 00:34:00.778280   21140 provision.go:177] copyRemoteCerts
	I0804 00:34:00.778327   21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:34:00.778346   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:34:00.780655   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:00.780960   21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:34:00.780989   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:00.781148   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:34:00.781336   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:34:00.781476   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:34:00.781598   21140 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
	I0804 00:34:00.868546   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0804 00:34:00.868625   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0804 00:34:00.892433   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0804 00:34:00.892507   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 00:34:00.915531   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0804 00:34:00.915587   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:34:00.939019   21140 provision.go:87] duration metric: took 271.698597ms to configureAuth
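
configureAuth (provision.go:84-87 above) copies the CA material locally, then issues a server certificate whose SANs cover every name this node answers to: 127.0.0.1, the node IP 192.168.39.188, the hostname, localhost and minikube, per the san=[...] list. A condensed Go sketch of issuing such a cert with crypto/x509; the key size, validity window, and the throwaway CA in main are simplifications, not minikube's actual parameters:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate for the given SANs with
// the provided CA, splitting DNS names and IPs into the proper fields.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, sans []string) ([]byte, *rsa.PrivateKey, error) {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-230158-m02"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, san := range sans {
		if ip := net.ParseIP(san); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, san)
		}
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	// Throwaway self-signed CA, standing in for the persisted minikube CA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	der, _, err := issueServerCert(ca, caKey, []string{
		"127.0.0.1", "192.168.39.188", "ha-230158-m02", "localhost", "minikube",
	})
	fmt.Println(len(der), err)
}
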
	I0804 00:34:00.939042   21140 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:34:00.939230   21140 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:34:00.939254   21140 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:34:00.939559   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:34:00.941901   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:00.942307   21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:34:00.942327   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:00.942459   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:34:00.942649   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:34:00.942819   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:34:00.942985   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:34:00.943135   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:34:00.943305   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0804 00:34:00.943318   21140 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 00:34:01.055739   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0804 00:34:01.055766   21140 buildroot.go:70] root file system type: tmpfs
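
The root filesystem probe ('df --output=fstype / | tail -n 1' returning tmpfs) tells the provisioner it is running on the RAM-backed Buildroot ISO. Locally the same answer can come from statfs(2) instead of shelling out; a sketch using golang.org/x/sys/unix, Linux-only by nature and not how minikube itself does it:

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// rootIsTmpfs checks the root filesystem type via statfs(2) rather
// than parsing `df --output=fstype / | tail -n 1` as the log does.
func rootIsTmpfs() (bool, error) {
	var fs unix.Statfs_t
	if err := unix.Statfs("/", &fs); err != nil {
		return false, err
	}
	return fs.Type == unix.TMPFS_MAGIC, nil
}

func main() {
	ok, err := rootIsTmpfs()
	fmt.Println(ok, err)
}
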
	I0804 00:34:01.055918   21140 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 00:34:01.055942   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:34:01.058621   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:01.058973   21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:34:01.059001   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:01.059203   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:34:01.059366   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:34:01.059560   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:34:01.059712   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:34:01.059898   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:34:01.060107   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0804 00:34:01.060200   21140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.132"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 00:34:01.187920   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.132
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 00:34:01.187950   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:34:01.190605   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:01.190996   21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:34:01.191028   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:01.191200   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:34:01.191425   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:34:01.191586   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:34:01.191762   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:34:01.191931   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:34:01.192109   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0804 00:34:01.192133   21140 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0804 00:34:02.973852   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
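
The install step at 00:34:01.192133 uses a compare-first idiom: 'diff -u old new || { mv new old; daemon-reload; enable; restart; }' only touches the daemon when the rendered unit actually differs. Here diff fails because no docker.service existed yet, so the file is moved into place and enabling it creates the symlink above. The same only-act-on-change logic sketched in Go (the paths and the sudo wrapper are illustrative):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installUnit writes unit to path and bounces the service only when
// the on-disk content differs, mirroring the diff || { mv; ... } idiom.
func installUnit(path string, unit []byte, service string) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, unit) {
		return nil // unchanged: no reload, no restart
	}
	if err := os.WriteFile(path, unit, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", service},
		{"systemctl", "restart", service},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	fmt.Println(installUnit("/lib/systemd/system/docker.service", unit, "docker"))
}
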
	
	I0804 00:34:02.973882   21140 main.go:141] libmachine: Checking connection to Docker...
	I0804 00:34:02.973895   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetURL
	I0804 00:34:02.975180   21140 main.go:141] libmachine: (ha-230158-m02) DBG | Using libvirt version 6000000
	I0804 00:34:02.977545   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:02.977879   21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:34:02.977907   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:02.978011   21140 main.go:141] libmachine: Docker is up and running!
	I0804 00:34:02.978027   21140 main.go:141] libmachine: Reticulating splines...
	I0804 00:34:02.978035   21140 client.go:171] duration metric: took 28.574452434s to LocalClient.Create
	I0804 00:34:02.978058   21140 start.go:167] duration metric: took 28.574514618s to libmachine.API.Create "ha-230158"
	I0804 00:34:02.978070   21140 start.go:293] postStartSetup for "ha-230158-m02" (driver="kvm2")
	I0804 00:34:02.978078   21140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:34:02.978101   21140 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:34:02.978341   21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:34:02.978382   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:34:02.980444   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:02.980724   21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:34:02.980741   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:02.980855   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:34:02.981022   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:34:02.981194   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:34:02.981362   21140 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
	I0804 00:34:03.065731   21140 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:34:03.070115   21140 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:34:03.070134   21140 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-3947/.minikube/addons for local assets ...
	I0804 00:34:03.070199   21140 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-3947/.minikube/files for local assets ...
	I0804 00:34:03.070312   21140 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> 111362.pem in /etc/ssl/certs
	I0804 00:34:03.070326   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> /etc/ssl/certs/111362.pem
	I0804 00:34:03.070430   21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:34:03.080885   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem --> /etc/ssl/certs/111362.pem (1708 bytes)
	I0804 00:34:03.103941   21140 start.go:296] duration metric: took 125.859795ms for postStartSetup
	I0804 00:34:03.104002   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetConfigRaw
	I0804 00:34:03.104596   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
	I0804 00:34:03.107330   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:03.107729   21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:34:03.107756   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:03.107958   21140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
	I0804 00:34:03.108164   21140 start.go:128] duration metric: took 28.721688077s to createHost
	I0804 00:34:03.108189   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:34:03.110106   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:03.110474   21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:34:03.110499   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:03.110753   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:34:03.110929   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:34:03.111096   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:34:03.111208   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:34:03.111337   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:34:03.111506   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0804 00:34:03.111516   21140 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 00:34:03.222970   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722731643.203343116
	
	I0804 00:34:03.222994   21140 fix.go:216] guest clock: 1722731643.203343116
	I0804 00:34:03.223005   21140 fix.go:229] Guest: 2024-08-04 00:34:03.203343116 +0000 UTC Remote: 2024-08-04 00:34:03.108175533 +0000 UTC m=+92.285257944 (delta=95.167583ms)
	I0804 00:34:03.223029   21140 fix.go:200] guest clock delta is within tolerance: 95.167583ms
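
The clock check runs 'date +%s.%N' on the guest (the %!s(MISSING).%!N(MISSING) rendering at 00:34:03.111516 is just the logger mangling the format verbs in the command string), parses 1722731643.203343116 back into a timestamp, and accepts the machine because the 95.167583ms delta is inside tolerance. A small Go sketch of that parse-and-compare, with the tolerance value assumed:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(out), ".")
	s, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	n, err := strconv.ParseInt((frac + "000000000")[:9], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(s, n), nil
}

func main() {
	guest, err := parseGuestClock("1722731643.203343116")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta < tolerance)
}
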
	I0804 00:34:03.223037   21140 start.go:83] releasing machines lock for "ha-230158-m02", held for 28.836660328s
	I0804 00:34:03.223063   21140 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:34:03.223345   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
	I0804 00:34:03.225993   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:03.226329   21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:34:03.226351   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:03.228712   21140 out.go:177] * Found network options:
	I0804 00:34:03.230042   21140 out.go:177]   - NO_PROXY=192.168.39.132
	W0804 00:34:03.231182   21140 proxy.go:119] fail to check proxy env: Error ip not in block
	I0804 00:34:03.231221   21140 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:34:03.231663   21140 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:34:03.231834   21140 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
	I0804 00:34:03.231905   21140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:34:03.231944   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	W0804 00:34:03.232042   21140 proxy.go:119] fail to check proxy env: Error ip not in block
	I0804 00:34:03.232120   21140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 00:34:03.232140   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
	I0804 00:34:03.234608   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:03.234862   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:03.234991   21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:34:03.235018   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:03.235123   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:34:03.235302   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:34:03.235311   21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:34:03.235330   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:03.235524   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:34:03.235544   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
	I0804 00:34:03.235694   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
	I0804 00:34:03.235692   21140 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
	I0804 00:34:03.235836   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
	I0804 00:34:03.235962   21140 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
	W0804 00:34:03.316348   21140 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:34:03.316420   21140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:34:03.338803   21140 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:34:03.338828   21140 start.go:495] detecting cgroup driver to use...
	I0804 00:34:03.338936   21140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:34:03.358068   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0804 00:34:03.368777   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 00:34:03.379245   21140 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 00:34:03.379303   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 00:34:03.389867   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 00:34:03.400270   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 00:34:03.411270   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 00:34:03.421718   21140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:34:03.432972   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 00:34:03.443789   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 00:34:03.454923   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0804 00:34:03.465482   21140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:34:03.475350   21140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:34:03.485285   21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:34:03.590505   21140 ssh_runner.go:195] Run: sudo systemctl restart containerd
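
Lines 00:34:03.358068 through 00:34:03.590505 reshape /etc/containerd/config.toml with sed: pin the pause sandbox image, force SystemdCgroup = false (the "cgroupfs" driver), migrate io.containerd.runtime.v1.linux and runc.v1 references to runc.v2, and reset the CNI conf_dir, then daemon-reload and restart containerd. The indentation-preserving SystemdCgroup rewrite in Go, as a sketch of what the corresponding sed expression does:

package main

import (
	"fmt"
	"regexp"
)

// setCgroupfs rewrites a containerd config.toml so the runc runtime
// uses cgroupfs instead of the systemd cgroup driver, preserving the
// original indentation like the sed command in the log.
func setCgroupfs(conf string) string {
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	return re.ReplaceAllString(conf, "${1}SystemdCgroup = false")
}

func main() {
	in := "  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n"
	fmt.Print(setCgroupfs(in))
}
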
	I0804 00:34:03.615665   21140 start.go:495] detecting cgroup driver to use...
	I0804 00:34:03.615750   21140 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 00:34:03.631563   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:34:03.647428   21140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:34:03.663904   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:34:03.677259   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 00:34:03.689907   21140 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0804 00:34:03.721179   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 00:34:03.735231   21140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:34:03.753269   21140 ssh_runner.go:195] Run: which cri-dockerd
	I0804 00:34:03.757177   21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 00:34:03.767005   21140 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0804 00:34:03.783229   21140 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 00:34:03.901393   21140 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 00:34:04.027419   21140 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 00:34:04.027457   21140 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0804 00:34:04.044350   21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:34:04.154078   21140 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 00:34:06.510775   21140 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.35665692s)
	I0804 00:34:06.510853   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 00:34:06.524398   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 00:34:06.536855   21140 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 00:34:06.642364   21140 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 00:34:06.763061   21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:34:06.881512   21140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 00:34:06.899056   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 00:34:06.912162   21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:34:07.033135   21140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 00:34:07.111821   21140 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 00:34:07.111882   21140 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
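
"Will wait 60s for socket path /var/run/cri-dockerd.sock" amounts to a stat poll with a deadline; the immediate success here means cri-dockerd came up on the first check. A minimal Go sketch of that wait, with the 500ms poll interval assumed:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses, like
// the wait for /var/run/cri-dockerd.sock above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s not ready after %v", path, timeout)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}
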
	I0804 00:34:07.117377   21140 start.go:563] Will wait 60s for crictl version
	I0804 00:34:07.117436   21140 ssh_runner.go:195] Run: which crictl
	I0804 00:34:07.122834   21140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:34:07.159702   21140 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0804 00:34:07.159774   21140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 00:34:07.184991   21140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 00:34:07.211671   21140 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0804 00:34:07.212904   21140 out.go:177]   - env NO_PROXY=192.168.39.132
	I0804 00:34:07.214403   21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
	I0804 00:34:07.217472   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:07.217944   21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
	I0804 00:34:07.217971   21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
	I0804 00:34:07.218194   21140 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 00:34:07.222220   21140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:34:07.235640   21140 mustload.go:65] Loading cluster: ha-230158
	I0804 00:34:07.235853   21140 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:34:07.236199   21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:34:07.236242   21140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:34:07.250786   21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39771
	I0804 00:34:07.251342   21140 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:34:07.251781   21140 main.go:141] libmachine: Using API Version  1
	I0804 00:34:07.251801   21140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:34:07.252086   21140 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:34:07.252243   21140 main.go:141] libmachine: (ha-230158) Calling .GetState
	I0804 00:34:07.253628   21140 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:34:07.253914   21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:34:07.253948   21140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:34:07.267875   21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36351
	I0804 00:34:07.268286   21140 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:34:07.268718   21140 main.go:141] libmachine: Using API Version  1
	I0804 00:34:07.268736   21140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:34:07.269035   21140 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:34:07.269319   21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:34:07.269532   21140 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158 for IP: 192.168.39.188
	I0804 00:34:07.269544   21140 certs.go:194] generating shared ca certs ...
	I0804 00:34:07.269559   21140 certs.go:226] acquiring lock for ca certs: {Name:mkffa482a260ec35b4e7e61a9f84c11349615c10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:34:07.269670   21140 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.key
	I0804 00:34:07.269708   21140 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.key
	I0804 00:34:07.269717   21140 certs.go:256] generating profile certs ...
	I0804 00:34:07.269774   21140 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.key
	I0804 00:34:07.269798   21140 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.614132ed
	I0804 00:34:07.269812   21140 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.614132ed with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.132 192.168.39.188 192.168.39.254]
	I0804 00:34:07.479685   21140 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.614132ed ...
	I0804 00:34:07.479713   21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.614132ed: {Name:mk4942c0828754fe87b4343b4543d452f5279ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:34:07.479872   21140 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.614132ed ...
	I0804 00:34:07.479885   21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.614132ed: {Name:mk7d37b9013df8b64903584b8f3e87686cf52657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:34:07.479961   21140 certs.go:381] copying /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.614132ed -> /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt
	I0804 00:34:07.480095   21140 certs.go:385] copying /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.614132ed -> /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key
	I0804 00:34:07.480217   21140 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.key
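The apiserver serving certificate generated above carries a fixed SAN list: the in-cluster service IPs 10.96.0.1 and 10.0.0.1, localhost, both control-plane node IPs, and the kube-vip VIP 192.168.39.254, so the apiserver is reachable over TLS at any of those addresses. A self-contained sketch of issuing such a cert with Go's crypto/x509 (the throwaway CA below is illustrative; minikube signs with its persisted ca.crt/ca.key):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch; errors elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the exact SAN list from the log line above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leaf := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.132"), net.ParseIP("192.168.39.188"), net.ParseIP("192.168.39.254"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}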
	I0804 00:34:07.480230   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0804 00:34:07.480248   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0804 00:34:07.480261   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0804 00:34:07.480274   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0804 00:34:07.480286   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0804 00:34:07.480298   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0804 00:34:07.480310   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0804 00:34:07.480322   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0804 00:34:07.480364   21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136.pem (1338 bytes)
	W0804 00:34:07.480392   21140 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136_empty.pem, impossibly tiny 0 bytes
	I0804 00:34:07.480402   21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 00:34:07.480422   21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:34:07.480441   21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:34:07.480462   21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem (1679 bytes)
	I0804 00:34:07.480497   21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem (1708 bytes)
	I0804 00:34:07.480523   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> /usr/share/ca-certificates/111362.pem
	I0804 00:34:07.480537   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:34:07.480549   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136.pem -> /usr/share/ca-certificates/11136.pem
	I0804 00:34:07.480578   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:34:07.483540   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:34:07.483941   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:34:07.483967   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:34:07.484158   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:34:07.484383   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:34:07.484570   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:34:07.484734   21140 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
	I0804 00:34:07.558603   21140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0804 00:34:07.563671   21140 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0804 00:34:07.575237   21140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0804 00:34:07.579302   21140 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0804 00:34:07.590134   21140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0804 00:34:07.594449   21140 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0804 00:34:07.606285   21140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0804 00:34:07.610012   21140 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0804 00:34:07.620077   21140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0804 00:34:07.624507   21140 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0804 00:34:07.634464   21140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0804 00:34:07.638732   21140 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0804 00:34:07.651163   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:34:07.675443   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:34:07.697641   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:34:07.720620   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 00:34:07.743358   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0804 00:34:07.766199   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:34:07.789338   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:34:07.812594   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 00:34:07.835867   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem --> /usr/share/ca-certificates/111362.pem (1708 bytes)
	I0804 00:34:07.858903   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:34:07.881640   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136.pem --> /usr/share/ca-certificates/11136.pem (1338 bytes)
	I0804 00:34:07.904313   21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0804 00:34:07.921094   21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0804 00:34:07.937606   21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0804 00:34:07.953663   21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0804 00:34:07.970041   21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0804 00:34:07.986209   21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0804 00:34:08.002865   21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0804 00:34:08.021694   21140 ssh_runner.go:195] Run: openssl version
	I0804 00:34:08.027587   21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:34:08.038709   21140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:34:08.043324   21140 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 00:21 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:34:08.043385   21140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:34:08.049038   21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:34:08.059481   21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11136.pem && ln -fs /usr/share/ca-certificates/11136.pem /etc/ssl/certs/11136.pem"
	I0804 00:34:08.070091   21140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11136.pem
	I0804 00:34:08.074494   21140 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 00:28 /usr/share/ca-certificates/11136.pem
	I0804 00:34:08.074533   21140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11136.pem
	I0804 00:34:08.079883   21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11136.pem /etc/ssl/certs/51391683.0"
	I0804 00:34:08.090363   21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111362.pem && ln -fs /usr/share/ca-certificates/111362.pem /etc/ssl/certs/111362.pem"
	I0804 00:34:08.100615   21140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111362.pem
	I0804 00:34:08.104909   21140 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 00:28 /usr/share/ca-certificates/111362.pem
	I0804 00:34:08.104945   21140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111362.pem
	I0804 00:34:08.110433   21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111362.pem /etc/ssl/certs/3ec20f2e.0"
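The b5213941.0, 51391683.0 and 3ec20f2e.0 links created above follow OpenSSL's c_rehash convention: openssl x509 -hash prints the certificate's subject-name hash, and verifiers look CAs up in /etc/ssl/certs under <hash>.0. A sketch reproducing the two steps locally (paths are illustrative, and the program needs root, like the sudo'd commands in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links certPath into /etc/ssl/certs under its OpenSSL
// subject-hash name, the same two steps the log runs over ssh.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // like ln -fs: replace any stale link first
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}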
	I0804 00:34:08.120574   21140 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:34:08.124582   21140 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0804 00:34:08.124647   21140 kubeadm.go:934] updating node {m02 192.168.39.188 8443 v1.30.3 docker true true} ...
	I0804 00:34:08.124753   21140 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-230158-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.188
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-230158 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
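Note the empty ExecStart= line before the populated one in the rendered unit: in a systemd drop-in, assigning an empty value first clears any ExecStart inherited from the base unit, which is the only way to replace the command rather than append a second one. A sketch of rendering that drop-in with text/template (the template is a simplification of minikube's real one):

package main

import (
	"os"
	"text/template"
)

// A trimmed-down version of the kubelet drop-in seen in the log.
const dropIn = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	t.Execute(os.Stdout, map[string]string{
		"Version": "v1.30.3", "Node": "ha-230158-m02", "IP": "192.168.39.188",
	})
}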
	I0804 00:34:08.124780   21140 kube-vip.go:115] generating kube-vip config ...
	I0804 00:34:08.124820   21140 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0804 00:34:08.139786   21140 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0804 00:34:08.139846   21140 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
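This manifest runs kube-vip as a static pod on each control-plane node. With cp_enable and vip_leaderelection set, the instances compete for the plndr-cp-lock Lease (5s lease duration, 3s renew deadline, 1s retry period), and the current leader answers ARP for 192.168.39.254 on eth0 and, with lb_enable, load-balances port 8443 across the apiservers. A minimal sketch of the same Lease-based election with client-go (kubeconfig path and log messages are illustrative):

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	host, _ := os.Hostname()

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "plndr-cp-lock", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: host},
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 5 * time.Second, // vip_leaseduration
		RenewDeadline: 3 * time.Second, // vip_renewdeadline
		RetryPeriod:   1 * time.Second, // vip_retryperiod
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("leader: would claim the VIP") },
			OnStoppedLeading: func() { log.Println("lost lease: would release the VIP") },
		},
	})
}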
	I0804 00:34:08.139898   21140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:34:08.149586   21140 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0804 00:34:08.149628   21140 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0804 00:34:08.158592   21140 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0804 00:34:08.158616   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0804 00:34:08.158661   21140 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0804 00:34:08.158676   21140 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0804 00:34:08.158681   21140 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0804 00:34:08.163476   21140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0804 00:34:08.163498   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0804 00:34:10.501737   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0804 00:34:10.501822   21140 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0804 00:34:10.506769   21140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0804 00:34:10.506799   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0804 00:34:12.405648   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:34:12.420742   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0804 00:34:12.420837   21140 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0804 00:34:12.425182   21140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0804 00:34:12.425209   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
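Each binary URL above carries a ?checksum=file:<url>.sha256 suffix: the expected digest is itself fetched, and the payload is verified against it before being cached locally and scp'd into /var/lib/minikube/binaries. A standalone sketch of that verify-while-downloading pattern (this shows the idea, not minikube's internal download package):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url to dest and checks the payload against the
// hex sha256 digest published at url + ".sha256".
func fetchVerified(url, dest string) error {
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sum, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	want := strings.Fields(string(sum))[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", url, got, want)
	}
	return nil
}

func main() {
	fmt.Println(fetchVerified("https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl", "kubectl"))
}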
	I0804 00:34:12.830358   21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0804 00:34:12.839608   21140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0804 00:34:12.856242   21140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:34:12.872578   21140 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0804 00:34:12.888266   21140 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0804 00:34:12.891912   21140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:34:12.903392   21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:34:13.018858   21140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:34:13.039089   21140 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:34:13.039397   21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:34:13.039432   21140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:34:13.053963   21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40565
	I0804 00:34:13.054374   21140 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:34:13.054851   21140 main.go:141] libmachine: Using API Version  1
	I0804 00:34:13.054873   21140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:34:13.055241   21140 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:34:13.055427   21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:34:13.055575   21140 start.go:317] joinCluster: &{Name:ha-230158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-230158 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.188 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:34:13.055718   21140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0804 00:34:13.055739   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:34:13.058643   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:34:13.059080   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:34:13.059107   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:34:13.059316   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:34:13.059529   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:34:13.059699   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:34:13.060688   21140 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
	I0804 00:34:13.242684   21140 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.188 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 00:34:13.242729   21140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9f5oe8.e28x00q0ngfisul1 --discovery-token-ca-cert-hash sha256:df45234da77be7664dbc18ef5748e1cb4d47aa47bd0026b7b7a8eef37767d0f0 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-230158-m02 --control-plane --apiserver-advertise-address=192.168.39.188 --apiserver-bind-port=8443"
	I0804 00:34:35.004730   21140 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9f5oe8.e28x00q0ngfisul1 --discovery-token-ca-cert-hash sha256:df45234da77be7664dbc18ef5748e1cb4d47aa47bd0026b7b7a8eef37767d0f0 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-230158-m02 --control-plane --apiserver-advertise-address=192.168.39.188 --apiserver-bind-port=8443": (21.761977329s)
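The join line itself came from kubeadm token create --print-join-command --ttl=0 on the primary (--ttl=0 makes the bootstrap token non-expiring, and the discovery hash pins the cluster CA); minikube then appends the control-plane specifics before running it on m02. A sketch of that composition (token and hash elided):

package main

import (
	"fmt"
	"strings"
)

// joinArgs appends the control-plane flags seen in the log to the bare
// join line that `kubeadm token create --print-join-command` returned.
func joinArgs(printed, nodeName, advertiseIP string) string {
	extra := []string{
		"--ignore-preflight-errors=all",
		"--cri-socket unix:///var/run/cri-dockerd.sock",
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		"--apiserver-bind-port=8443",
	}
	return printed + " " + strings.Join(extra, " ")
}

func main() {
	printed := "kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"
	fmt.Println(joinArgs(printed, "ha-230158-m02", "192.168.39.188"))
}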
	I0804 00:34:35.004764   21140 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0804 00:34:35.554008   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-230158-m02 minikube.k8s.io/updated_at=2024_08_04T00_34_35_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6 minikube.k8s.io/name=ha-230158 minikube.k8s.io/primary=false
	I0804 00:34:35.691757   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-230158-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0804 00:34:35.818745   21140 start.go:319] duration metric: took 22.763168526s to joinCluster
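After the join, the new node is labelled with minikube metadata and its node-role.kubernetes.io/control-plane:NoSchedule taint is removed; the trailing "-" in the kubectl taint invocation above is kubectl's remove-this-taint syntax, so secondary control-plane nodes also accept ordinary workloads. The same removal done programmatically with client-go (a sketch; minikube shells out to kubectl as shown):

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-230158-m02", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Keep every taint except control-plane:NoSchedule.
	kept := node.Spec.Taints[:0]
	for _, t := range node.Spec.Taints {
		if t.Key != "node-role.kubernetes.io/control-plane" || t.Effect != corev1.TaintEffectNoSchedule {
			kept = append(kept, t)
		}
	}
	node.Spec.Taints = kept
	if _, err := client.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}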
	I0804 00:34:35.818822   21140 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.188 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 00:34:35.819150   21140 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:34:35.820459   21140 out.go:177] * Verifying Kubernetes components...
	I0804 00:34:35.821769   21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:34:36.156667   21140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:34:36.198514   21140 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19364-3947/kubeconfig
	I0804 00:34:36.198830   21140 kapi.go:59] client config for ha-230158: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.crt", KeyFile:"/home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.key", CAFile:"/home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0804 00:34:36.198909   21140 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.132:8443
	I0804 00:34:36.199154   21140 node_ready.go:35] waiting up to 6m0s for node "ha-230158-m02" to be "Ready" ...
	I0804 00:34:36.199257   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:36.199267   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:36.199277   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:36.199282   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:36.226564   21140 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0804 00:34:36.700098   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:36.700120   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:36.700131   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:36.700138   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:36.711324   21140 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0804 00:34:37.200254   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:37.200278   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:37.200290   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:37.200298   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:37.204598   21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 00:34:37.699445   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:37.699467   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:37.699482   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:37.699488   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:37.703558   21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 00:34:38.199900   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:38.199917   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:38.199926   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:38.199930   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:38.203357   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:38.204075   21140 node_ready.go:53] node "ha-230158-m02" has status "Ready":"False"
	I0804 00:34:38.699358   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:38.699381   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:38.699388   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:38.699392   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:38.702677   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:39.199615   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:39.199641   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:39.199649   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:39.199653   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:39.206263   21140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0804 00:34:39.700084   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:39.700108   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:39.700116   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:39.700121   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:39.704339   21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 00:34:40.199338   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:40.199364   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:40.199375   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:40.199383   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:40.202609   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:40.699568   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:40.699589   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:40.699597   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:40.699600   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:40.702632   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:40.703209   21140 node_ready.go:53] node "ha-230158-m02" has status "Ready":"False"
	I0804 00:34:41.199424   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:41.199454   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:41.199463   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:41.199467   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:41.202612   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:41.699623   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:41.699643   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:41.699651   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:41.699656   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:41.702876   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:42.200058   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:42.200077   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:42.200085   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:42.200088   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:42.204367   21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 00:34:42.699492   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:42.699525   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:42.699536   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:42.699540   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:42.702416   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:34:43.200086   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:43.200111   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:43.200123   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:43.200129   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:43.204006   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:43.204729   21140 node_ready.go:53] node "ha-230158-m02" has status "Ready":"False"
	I0804 00:34:43.700249   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:43.700271   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:43.700278   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:43.700281   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:43.703414   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:44.199360   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:44.199384   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:44.199394   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:44.199399   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:44.202381   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:34:44.699980   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:44.699999   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:44.700007   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:44.700011   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:44.702991   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:34:45.200017   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:45.200039   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:45.200046   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:45.200051   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:45.203656   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:45.700015   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:45.700037   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:45.700047   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:45.700052   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:45.702590   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:34:45.703419   21140 node_ready.go:53] node "ha-230158-m02" has status "Ready":"False"
	I0804 00:34:46.199304   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:46.199326   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:46.199335   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:46.199340   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:46.202195   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:34:46.699359   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:46.699385   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:46.699397   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:46.699403   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:46.702123   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:34:47.200099   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:47.200121   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:47.200127   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:47.200131   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:47.203724   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:47.699403   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:47.699425   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:47.699435   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:47.699439   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:47.703523   21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 00:34:47.704523   21140 node_ready.go:53] node "ha-230158-m02" has status "Ready":"False"
	I0804 00:34:48.199970   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:48.199991   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:48.199998   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:48.200001   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:48.204184   21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 00:34:48.699350   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:48.699371   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:48.699379   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:48.699383   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:48.702332   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:34:49.199373   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:49.199392   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:49.199399   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:49.199404   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:49.202714   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:49.700177   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:49.700199   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:49.700207   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:49.700212   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:49.703239   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:50.200362   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:50.200388   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:50.200399   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:50.200407   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:50.204353   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:50.205240   21140 node_ready.go:53] node "ha-230158-m02" has status "Ready":"False"
	I0804 00:34:50.699415   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:50.699443   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:50.699451   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:50.699456   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:50.702632   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:51.199336   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:51.199356   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:51.199365   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:51.199372   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:51.202414   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:51.699440   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:51.699463   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:51.699470   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:51.699474   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:51.702837   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:52.199493   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:52.199515   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:52.199522   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:52.199527   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:52.202962   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:52.700340   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:52.700361   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:52.700370   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:52.700374   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:52.704175   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:52.705307   21140 node_ready.go:53] node "ha-230158-m02" has status "Ready":"False"
	I0804 00:34:53.200250   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:53.200273   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:53.200282   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:53.200286   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:53.203612   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:53.699935   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:53.699956   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:53.699963   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:53.699966   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:53.703682   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:54.199619   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:54.199644   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:54.199656   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:54.199662   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:54.202953   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:54.699443   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:54.699466   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:54.699474   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:54.699477   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:54.702915   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:55.199839   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:55.199860   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:55.199868   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:55.199873   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:55.203990   21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 00:34:55.204671   21140 node_ready.go:53] node "ha-230158-m02" has status "Ready":"False"
	I0804 00:34:55.700081   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:55.700106   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:55.700118   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:55.700123   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:55.703768   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:56.200350   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:56.200374   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:56.200386   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:56.200391   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:56.204003   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:56.700084   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:56.700107   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:56.700115   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:56.700119   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:56.703414   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:57.199662   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:57.199686   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:57.199697   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:57.199702   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:57.207529   21140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0804 00:34:57.208233   21140 node_ready.go:53] node "ha-230158-m02" has status "Ready":"False"
	I0804 00:34:57.699361   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:57.699387   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:57.699396   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:57.699401   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:57.702114   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:34:57.702628   21140 node_ready.go:49] node "ha-230158-m02" has status "Ready":"True"
	I0804 00:34:57.702649   21140 node_ready.go:38] duration metric: took 21.503473952s for node "ha-230158-m02" to be "Ready" ...
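The half-second GET loop above is node_ready polling the node object until its Ready condition reports True, which took 21.5s here (roughly kubelet start plus the CNI coming up). An equivalent wait written against client-go, using the same 500ms cadence and 6m budget (kubeconfig path as in the log):

package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19364-3947/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-230158-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	log.Println("wait result:", err)
}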
	I0804 00:34:57.702657   21140 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:34:57.702710   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0804 00:34:57.702718   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:57.702725   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:57.702731   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:57.707156   21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 00:34:57.713455   21140 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cqbjc" in "kube-system" namespace to be "Ready" ...
	I0804 00:34:57.713525   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cqbjc
	I0804 00:34:57.713531   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:57.713538   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:57.713543   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:57.716204   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:34:57.716817   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
	I0804 00:34:57.716832   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:57.716839   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:57.716843   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:57.719114   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:34:57.719698   21140 pod_ready.go:92] pod "coredns-7db6d8ff4d-cqbjc" in "kube-system" namespace has status "Ready":"True"
	I0804 00:34:57.719715   21140 pod_ready.go:81] duration metric: took 6.238758ms for pod "coredns-7db6d8ff4d-cqbjc" in "kube-system" namespace to be "Ready" ...
	I0804 00:34:57.719726   21140 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xt2gb" in "kube-system" namespace to be "Ready" ...
	I0804 00:34:57.719778   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xt2gb
	I0804 00:34:57.719785   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:57.719794   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:57.719800   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:57.721849   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:34:57.722448   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
	I0804 00:34:57.722461   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:57.722467   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:57.722470   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:57.724829   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:34:57.725653   21140 pod_ready.go:92] pod "coredns-7db6d8ff4d-xt2gb" in "kube-system" namespace has status "Ready":"True"
	I0804 00:34:57.725669   21140 pod_ready.go:81] duration metric: took 5.935947ms for pod "coredns-7db6d8ff4d-xt2gb" in "kube-system" namespace to be "Ready" ...
	I0804 00:34:57.725677   21140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-230158" in "kube-system" namespace to be "Ready" ...
	I0804 00:34:57.725714   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-230158
	I0804 00:34:57.725722   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:57.725728   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:57.725734   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:57.727968   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:34:57.728620   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
	I0804 00:34:57.728638   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:57.728647   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:57.728651   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:57.730852   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:34:57.731781   21140 pod_ready.go:92] pod "etcd-ha-230158" in "kube-system" namespace has status "Ready":"True"
	I0804 00:34:57.731796   21140 pod_ready.go:81] duration metric: took 6.114243ms for pod "etcd-ha-230158" in "kube-system" namespace to be "Ready" ...
	I0804 00:34:57.731803   21140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
	I0804 00:34:57.731848   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-230158-m02
	I0804 00:34:57.731857   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:57.731867   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:57.731876   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:57.734086   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:34:57.734660   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:57.734675   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:57.734684   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:57.734695   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:57.736819   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:34:57.737268   21140 pod_ready.go:92] pod "etcd-ha-230158-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 00:34:57.737286   21140 pod_ready.go:81] duration metric: took 5.477087ms for pod "etcd-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
	I0804 00:34:57.737303   21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-230158" in "kube-system" namespace to be "Ready" ...
	I0804 00:34:57.899675   21140 request.go:629] Waited for 162.319339ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-230158
	I0804 00:34:57.899773   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-230158
	I0804 00:34:57.899784   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:57.899796   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:57.899803   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:57.903095   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
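
The `request.go:629] Waited ... due to client-side throttling` lines that follow come from client-go's token-bucket rate limiter, not from the apiserver. A sketch reproducing the effect with client-go's flowcontrol package; the 5 QPS / burst 10 values are assumed classic client-go defaults that would yield the ~200ms waits seen here:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Token bucket: 5 requests/second with a burst of 10 (assumed defaults).
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
	for i := 0; i < 15; i++ {
		start := time.Now()
		limiter.Accept() // blocks until a token is available
		if wait := time.Since(start); wait > time.Millisecond {
			fmt.Printf("request %d waited %v due to client-side throttling\n", i, wait)
		}
	}
}
```

Once the burst is spent, each call waits roughly 1/QPS = 200ms, which matches the ~162–196ms gaps logged below.
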
	I0804 00:34:58.099982   21140 request.go:629] Waited for 196.23398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158
	I0804 00:34:58.100033   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
	I0804 00:34:58.100038   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:58.100050   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:58.100055   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:58.103327   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:58.103770   21140 pod_ready.go:92] pod "kube-apiserver-ha-230158" in "kube-system" namespace has status "Ready":"True"
	I0804 00:34:58.103785   21140 pod_ready.go:81] duration metric: took 366.474938ms for pod "kube-apiserver-ha-230158" in "kube-system" namespace to be "Ready" ...
	I0804 00:34:58.103794   21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
	I0804 00:34:58.299984   21140 request.go:629] Waited for 196.13474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-230158-m02
	I0804 00:34:58.300050   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-230158-m02
	I0804 00:34:58.300055   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:58.300063   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:58.300066   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:58.303008   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:34:58.500054   21140 request.go:629] Waited for 196.35867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:58.500106   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:58.500110   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:58.500117   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:58.500122   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:58.503069   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:34:58.503523   21140 pod_ready.go:92] pod "kube-apiserver-ha-230158-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 00:34:58.503541   21140 pod_ready.go:81] duration metric: took 399.740623ms for pod "kube-apiserver-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
	I0804 00:34:58.503550   21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-230158" in "kube-system" namespace to be "Ready" ...
	I0804 00:34:58.699633   21140 request.go:629] Waited for 195.997904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-230158
	I0804 00:34:58.699685   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-230158
	I0804 00:34:58.699690   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:58.699697   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:58.699702   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:58.703099   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:58.900061   21140 request.go:629] Waited for 196.396916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158
	I0804 00:34:58.900138   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
	I0804 00:34:58.900150   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:58.900162   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:58.900174   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:58.903341   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:58.903879   21140 pod_ready.go:92] pod "kube-controller-manager-ha-230158" in "kube-system" namespace has status "Ready":"True"
	I0804 00:34:58.903904   21140 pod_ready.go:81] duration metric: took 400.346324ms for pod "kube-controller-manager-ha-230158" in "kube-system" namespace to be "Ready" ...
	I0804 00:34:58.903917   21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
	I0804 00:34:59.099943   21140 request.go:629] Waited for 195.954598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-230158-m02
	I0804 00:34:59.100017   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-230158-m02
	I0804 00:34:59.100024   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:59.100031   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:59.100035   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:59.103509   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:59.299446   21140 request.go:629] Waited for 195.230977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:59.299526   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:59.299541   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:59.299553   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:59.299557   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:59.302558   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:34:59.303339   21140 pod_ready.go:92] pod "kube-controller-manager-ha-230158-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 00:34:59.303356   21140 pod_ready.go:81] duration metric: took 399.432866ms for pod "kube-controller-manager-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
	I0804 00:34:59.303364   21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8tgp2" in "kube-system" namespace to be "Ready" ...
	I0804 00:34:59.500018   21140 request.go:629] Waited for 196.594484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8tgp2
	I0804 00:34:59.500098   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8tgp2
	I0804 00:34:59.500113   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:59.500128   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:59.500140   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:59.503548   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:59.699517   21140 request.go:629] Waited for 195.278381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:59.699567   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:34:59.699572   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:59.699579   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:59.699582   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:59.702996   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:34:59.703492   21140 pod_ready.go:92] pod "kube-proxy-8tgp2" in "kube-system" namespace has status "Ready":"True"
	I0804 00:34:59.703510   21140 pod_ready.go:81] duration metric: took 400.140483ms for pod "kube-proxy-8tgp2" in "kube-system" namespace to be "Ready" ...
	I0804 00:34:59.703519   21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vdn92" in "kube-system" namespace to be "Ready" ...
	I0804 00:34:59.899662   21140 request.go:629] Waited for 196.079238ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdn92
	I0804 00:34:59.899722   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdn92
	I0804 00:34:59.899742   21140 round_trippers.go:469] Request Headers:
	I0804 00:34:59.899755   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:34:59.899761   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:34:59.903971   21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 00:35:00.100145   21140 request.go:629] Waited for 195.383817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158
	I0804 00:35:00.100208   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
	I0804 00:35:00.100214   21140 round_trippers.go:469] Request Headers:
	I0804 00:35:00.100222   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:35:00.100227   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:35:00.103195   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:35:00.103921   21140 pod_ready.go:92] pod "kube-proxy-vdn92" in "kube-system" namespace has status "Ready":"True"
	I0804 00:35:00.103941   21140 pod_ready.go:81] duration metric: took 400.4062ms for pod "kube-proxy-vdn92" in "kube-system" namespace to be "Ready" ...
	I0804 00:35:00.103950   21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-230158" in "kube-system" namespace to be "Ready" ...
	I0804 00:35:00.300134   21140 request.go:629] Waited for 196.118329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-230158
	I0804 00:35:00.300211   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-230158
	I0804 00:35:00.300217   21140 round_trippers.go:469] Request Headers:
	I0804 00:35:00.300224   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:35:00.300232   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:35:00.303575   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:35:00.499708   21140 request.go:629] Waited for 195.37409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158
	I0804 00:35:00.499783   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
	I0804 00:35:00.499788   21140 round_trippers.go:469] Request Headers:
	I0804 00:35:00.499796   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:35:00.499800   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:35:00.503391   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:35:00.503811   21140 pod_ready.go:92] pod "kube-scheduler-ha-230158" in "kube-system" namespace has status "Ready":"True"
	I0804 00:35:00.503826   21140 pod_ready.go:81] duration metric: took 399.870925ms for pod "kube-scheduler-ha-230158" in "kube-system" namespace to be "Ready" ...
	I0804 00:35:00.503837   21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
	I0804 00:35:00.700071   21140 request.go:629] Waited for 196.180127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-230158-m02
	I0804 00:35:00.700123   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-230158-m02
	I0804 00:35:00.700128   21140 round_trippers.go:469] Request Headers:
	I0804 00:35:00.700141   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:35:00.700144   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:35:00.703799   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:35:00.899942   21140 request.go:629] Waited for 195.429445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:35:00.899994   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:35:00.899999   21140 round_trippers.go:469] Request Headers:
	I0804 00:35:00.900006   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:35:00.900011   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:35:00.903149   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:35:00.903698   21140 pod_ready.go:92] pod "kube-scheduler-ha-230158-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 00:35:00.903715   21140 pod_ready.go:81] duration metric: took 399.871252ms for pod "kube-scheduler-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
	I0804 00:35:00.903725   21140 pod_ready.go:38] duration metric: took 3.201056231s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:35:00.903743   21140 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:35:00.903790   21140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:35:00.919660   21140 api_server.go:72] duration metric: took 25.100801381s to wait for apiserver process to appear ...
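
The `api_server.go:52` process wait above reduces to running pgrep on the guest and treating exit status 0 as "the apiserver process exists". A sketch using a local exec for brevity (minikube routes the same command through its SSH runner):

```go
package main

import (
	"fmt"
	"os/exec"
)

// apiserverRunning mirrors the logged command: pgrep exits non-zero
// when no process matches, so the exit code alone is the answer.
func apiserverRunning() bool {
	cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
	return cmd.Run() == nil
}

func main() {
	fmt.Println("apiserver process present:", apiserverRunning())
}
```
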
	I0804 00:35:00.919693   21140 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:35:00.919712   21140 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8443/healthz ...
	I0804 00:35:00.927575   21140 api_server.go:279] https://192.168.39.132:8443/healthz returned 200:
	ok
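
The healthz probe above is a plain HTTPS GET whose body should read "ok". A minimal sketch, with the caveat that `InsecureSkipVerify` is only for illustration (minikube trusts its own cluster CA instead):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	resp, err := client.Get("https://192.168.39.132:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
```
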
	I0804 00:35:00.927643   21140 round_trippers.go:463] GET https://192.168.39.132:8443/version
	I0804 00:35:00.927653   21140 round_trippers.go:469] Request Headers:
	I0804 00:35:00.927664   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:35:00.927670   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:35:00.929869   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:35:00.930059   21140 api_server.go:141] control plane version: v1.30.3
	I0804 00:35:00.930081   21140 api_server.go:131] duration metric: took 10.380541ms to wait for apiserver health ...
	I0804 00:35:00.930091   21140 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:35:01.099684   21140 request.go:629] Waited for 169.52597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0804 00:35:01.099769   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0804 00:35:01.099779   21140 round_trippers.go:469] Request Headers:
	I0804 00:35:01.099790   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:35:01.099803   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:35:01.105320   21140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0804 00:35:01.110892   21140 system_pods.go:59] 17 kube-system pods found
	I0804 00:35:01.110925   21140 system_pods.go:61] "coredns-7db6d8ff4d-cqbjc" [d99b5cde-3b5b-4c29-82c4-ec9fa36b4479] Running
	I0804 00:35:01.110932   21140 system_pods.go:61] "coredns-7db6d8ff4d-xt2gb" [2bd541a1-7bf0-4709-b600-365d5527b936] Running
	I0804 00:35:01.110938   21140 system_pods.go:61] "etcd-ha-230158" [dc6a8dde-229d-4857-8f08-dcc8399b1420] Running
	I0804 00:35:01.110943   21140 system_pods.go:61] "etcd-ha-230158-m02" [ed2085f3-8b06-4e15-8ed3-bd434d9aaebb] Running
	I0804 00:35:01.110947   21140 system_pods.go:61] "kindnet-n5cql" [56108054-acd3-48ae-b929-75bd31cbd1ad] Running
	I0804 00:35:01.110956   21140 system_pods.go:61] "kindnet-wfd5t" [b7ccd328-13aa-4161-8a20-5df8d153592f] Running
	I0804 00:35:01.110961   21140 system_pods.go:61] "kube-apiserver-ha-230158" [8c1d6b4d-e30e-4b30-84ff-f53490a7d9ec] Running
	I0804 00:35:01.110967   21140 system_pods.go:61] "kube-apiserver-ha-230158-m02" [8d384508-62d2-450a-a512-622aac96913a] Running
	I0804 00:35:01.110972   21140 system_pods.go:61] "kube-controller-manager-ha-230158" [cf39dcfb-ca37-45e7-9306-456ea22b484c] Running
	I0804 00:35:01.110977   21140 system_pods.go:61] "kube-controller-manager-ha-230158-m02" [c751903c-cb15-4718-87d7-f9ccf79d5869] Running
	I0804 00:35:01.110983   21140 system_pods.go:61] "kube-proxy-8tgp2" [17ce55b9-8d25-4b4a-9b12-ff2cb84c22fa] Running
	I0804 00:35:01.110987   21140 system_pods.go:61] "kube-proxy-vdn92" [02c77eda-8f0e-49d4-ae42-bbf18d0eeaf5] Running
	I0804 00:35:01.110990   21140 system_pods.go:61] "kube-scheduler-ha-230158" [c24d7658-a418-4a21-8e93-e31af5d65e05] Running
	I0804 00:35:01.110993   21140 system_pods.go:61] "kube-scheduler-ha-230158-m02" [97d10375-f0ca-4e13-bc7b-8d775aea4678] Running
	I0804 00:35:01.110997   21140 system_pods.go:61] "kube-vip-ha-230158" [f784b7b5-0db7-49f2-bcac-3a0dbeee74dd] Running
	I0804 00:35:01.111000   21140 system_pods.go:61] "kube-vip-ha-230158-m02" [0c04a6aa-7d79-4318-9cd7-b936d3358e19] Running
	I0804 00:35:01.111003   21140 system_pods.go:61] "storage-provisioner" [653e0c50-af0a-4708-aaa9-b0d63616df94] Running
	I0804 00:35:01.111009   21140 system_pods.go:74] duration metric: took 180.911846ms to wait for pod list to return data ...
	I0804 00:35:01.111018   21140 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:35:01.299365   21140 request.go:629] Waited for 188.274972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/default/serviceaccounts
	I0804 00:35:01.299415   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/default/serviceaccounts
	I0804 00:35:01.299421   21140 round_trippers.go:469] Request Headers:
	I0804 00:35:01.299429   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:35:01.299435   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:35:01.302985   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:35:01.303165   21140 default_sa.go:45] found service account: "default"
	I0804 00:35:01.303184   21140 default_sa.go:55] duration metric: took 192.159471ms for default service account to be created ...
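
The default_sa wait above just lists service accounts in the "default" namespace and succeeds once one named "default" appears. A hypothetical client-go sketch of that check:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	sas, err := cs.CoreV1().ServiceAccounts("default").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, sa := range sas.Items {
		if sa.Name == "default" {
			fmt.Println(`found service account: "default"`)
		}
	}
}
```
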
	I0804 00:35:01.303192   21140 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:35:01.499542   21140 request.go:629] Waited for 196.290629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0804 00:35:01.499612   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0804 00:35:01.499619   21140 round_trippers.go:469] Request Headers:
	I0804 00:35:01.499627   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:35:01.499632   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:35:01.504649   21140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0804 00:35:01.510115   21140 system_pods.go:86] 17 kube-system pods found
	I0804 00:35:01.510141   21140 system_pods.go:89] "coredns-7db6d8ff4d-cqbjc" [d99b5cde-3b5b-4c29-82c4-ec9fa36b4479] Running
	I0804 00:35:01.510151   21140 system_pods.go:89] "coredns-7db6d8ff4d-xt2gb" [2bd541a1-7bf0-4709-b600-365d5527b936] Running
	I0804 00:35:01.510156   21140 system_pods.go:89] "etcd-ha-230158" [dc6a8dde-229d-4857-8f08-dcc8399b1420] Running
	I0804 00:35:01.510161   21140 system_pods.go:89] "etcd-ha-230158-m02" [ed2085f3-8b06-4e15-8ed3-bd434d9aaebb] Running
	I0804 00:35:01.510168   21140 system_pods.go:89] "kindnet-n5cql" [56108054-acd3-48ae-b929-75bd31cbd1ad] Running
	I0804 00:35:01.510173   21140 system_pods.go:89] "kindnet-wfd5t" [b7ccd328-13aa-4161-8a20-5df8d153592f] Running
	I0804 00:35:01.510192   21140 system_pods.go:89] "kube-apiserver-ha-230158" [8c1d6b4d-e30e-4b30-84ff-f53490a7d9ec] Running
	I0804 00:35:01.510199   21140 system_pods.go:89] "kube-apiserver-ha-230158-m02" [8d384508-62d2-450a-a512-622aac96913a] Running
	I0804 00:35:01.510207   21140 system_pods.go:89] "kube-controller-manager-ha-230158" [cf39dcfb-ca37-45e7-9306-456ea22b484c] Running
	I0804 00:35:01.510212   21140 system_pods.go:89] "kube-controller-manager-ha-230158-m02" [c751903c-cb15-4718-87d7-f9ccf79d5869] Running
	I0804 00:35:01.510218   21140 system_pods.go:89] "kube-proxy-8tgp2" [17ce55b9-8d25-4b4a-9b12-ff2cb84c22fa] Running
	I0804 00:35:01.510222   21140 system_pods.go:89] "kube-proxy-vdn92" [02c77eda-8f0e-49d4-ae42-bbf18d0eeaf5] Running
	I0804 00:35:01.510228   21140 system_pods.go:89] "kube-scheduler-ha-230158" [c24d7658-a418-4a21-8e93-e31af5d65e05] Running
	I0804 00:35:01.510245   21140 system_pods.go:89] "kube-scheduler-ha-230158-m02" [97d10375-f0ca-4e13-bc7b-8d775aea4678] Running
	I0804 00:35:01.510254   21140 system_pods.go:89] "kube-vip-ha-230158" [f784b7b5-0db7-49f2-bcac-3a0dbeee74dd] Running
	I0804 00:35:01.510259   21140 system_pods.go:89] "kube-vip-ha-230158-m02" [0c04a6aa-7d79-4318-9cd7-b936d3358e19] Running
	I0804 00:35:01.510266   21140 system_pods.go:89] "storage-provisioner" [653e0c50-af0a-4708-aaa9-b0d63616df94] Running
	I0804 00:35:01.510274   21140 system_pods.go:126] duration metric: took 207.074596ms to wait for k8s-apps to be running ...
	I0804 00:35:01.510286   21140 system_svc.go:44] waiting for kubelet service to be running ...
	I0804 00:35:01.510326   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:35:01.527215   21140 system_svc.go:56] duration metric: took 16.92222ms for WaitForService to wait for kubelet
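
The kubelet probe above relies on `systemctl is-active --quiet` exiting 0 only when the unit is active, so no output parsing is needed. A sketch with a local exec (minikube runs this over SSH on the node):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 => active; anything else => not running.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	fmt.Println("kubelet running:", err == nil)
}
```
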
	I0804 00:35:01.527241   21140 kubeadm.go:582] duration metric: took 25.708386161s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:35:01.527263   21140 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:35:01.699586   21140 request.go:629] Waited for 172.25436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes
	I0804 00:35:01.699658   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes
	I0804 00:35:01.699664   21140 round_trippers.go:469] Request Headers:
	I0804 00:35:01.699671   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:35:01.699676   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:35:01.703487   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:35:01.704426   21140 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:35:01.704448   21140 node_conditions.go:123] node cpu capacity is 2
	I0804 00:35:01.704458   21140 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:35:01.704461   21140 node_conditions.go:123] node cpu capacity is 2
	I0804 00:35:01.704465   21140 node_conditions.go:105] duration metric: took 177.197702ms to run NodePressure ...
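
The NodePressure verification above reads each node's reported capacity, one storage/cpu pair per node. A hypothetical client-go sketch producing the same two lines per node as node_conditions.go logs:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
		fmt.Printf("node cpu capacity is %s\n", cpu.String())
	}
}
```
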
	I0804 00:35:01.704478   21140 start.go:241] waiting for startup goroutines ...
	I0804 00:35:01.704509   21140 start.go:255] writing updated cluster config ...
	I0804 00:35:01.706635   21140 out.go:177] 
	I0804 00:35:01.708170   21140 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:35:01.708270   21140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
	I0804 00:35:01.709941   21140 out.go:177] * Starting "ha-230158-m03" control-plane node in "ha-230158" cluster
	I0804 00:35:01.711379   21140 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0804 00:35:01.711400   21140 cache.go:56] Caching tarball of preloaded images
	I0804 00:35:01.711488   21140 preload.go:172] Found /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0804 00:35:01.711501   21140 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0804 00:35:01.711588   21140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
	I0804 00:35:01.711745   21140 start.go:360] acquireMachinesLock for ha-230158-m03: {Name:mk3c8b650475b5a29be5f1e49e0345d4de7c1632 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:35:01.711784   21140 start.go:364] duration metric: took 22.409µs to acquireMachinesLock for "ha-230158-m03"
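
The Spec dumped in the acquireMachinesLock line ({Name ... Clock ... Delay:500ms Timeout:13m0s Cancel}) has the shape of a github.com/juju/mutex spec, which guards machine creation across minikube processes. A sketch under that assumption, with the values copied from the log:

```go
package main

import (
	"fmt"
	"time"

	"github.com/juju/clock"
	"github.com/juju/mutex/v2"
)

func main() {
	// Cross-process named lock; blocks up to Timeout, retrying every Delay.
	releaser, err := mutex.Acquire(mutex.Spec{
		Name:    "mk3c8b650475b5a29be5f1e49e0345d4de7c1632", // lock name from the log
		Clock:   clock.WallClock,
		Delay:   500 * time.Millisecond,
		Timeout: 13 * time.Minute,
	})
	if err != nil {
		panic(err)
	}
	defer releaser.Release()
	fmt.Println("acquired machines lock")
}
```
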
	I0804 00:35:01.711800   21140 start.go:93] Provisioning new machine with config: &{Name:ha-230158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-230158 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.188 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 00:35:01.711916   21140 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0804 00:35:01.713379   21140 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0804 00:35:01.713453   21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:35:01.713490   21140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:35:01.728747   21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42461
	I0804 00:35:01.729142   21140 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:35:01.729578   21140 main.go:141] libmachine: Using API Version  1
	I0804 00:35:01.729600   21140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:35:01.729919   21140 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:35:01.730104   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetMachineName
	I0804 00:35:01.730287   21140 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
	I0804 00:35:01.730456   21140 start.go:159] libmachine.API.Create for "ha-230158" (driver="kvm2")
	I0804 00:35:01.730487   21140 client.go:168] LocalClient.Create starting
	I0804 00:35:01.730521   21140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem
	I0804 00:35:01.730562   21140 main.go:141] libmachine: Decoding PEM data...
	I0804 00:35:01.730584   21140 main.go:141] libmachine: Parsing certificate...
	I0804 00:35:01.730648   21140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem
	I0804 00:35:01.730674   21140 main.go:141] libmachine: Decoding PEM data...
	I0804 00:35:01.730690   21140 main.go:141] libmachine: Parsing certificate...
	I0804 00:35:01.730714   21140 main.go:141] libmachine: Running pre-create checks...
	I0804 00:35:01.730726   21140 main.go:141] libmachine: (ha-230158-m03) Calling .PreCreateCheck
	I0804 00:35:01.730876   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetConfigRaw
	I0804 00:35:01.732019   21140 main.go:141] libmachine: Creating machine...
	I0804 00:35:01.732037   21140 main.go:141] libmachine: (ha-230158-m03) Calling .Create
	I0804 00:35:01.732201   21140 main.go:141] libmachine: (ha-230158-m03) Creating KVM machine...
	I0804 00:35:01.733430   21140 main.go:141] libmachine: (ha-230158-m03) DBG | found existing default KVM network
	I0804 00:35:01.733570   21140 main.go:141] libmachine: (ha-230158-m03) DBG | found existing private KVM network mk-ha-230158
	I0804 00:35:01.733660   21140 main.go:141] libmachine: (ha-230158-m03) Setting up store path in /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03 ...
	I0804 00:35:01.733700   21140 main.go:141] libmachine: (ha-230158-m03) Building disk image from file:///home/jenkins/minikube-integration/19364-3947/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0804 00:35:01.733750   21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:01.733651   22024 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-3947/.minikube
	I0804 00:35:01.733838   21140 main.go:141] libmachine: (ha-230158-m03) Downloading /home/jenkins/minikube-integration/19364-3947/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-3947/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0804 00:35:01.963276   21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:01.963145   22024 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa...
	I0804 00:35:02.150959   21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:02.150818   22024 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/ha-230158-m03.rawdisk...
	I0804 00:35:02.150990   21140 main.go:141] libmachine: (ha-230158-m03) DBG | Writing magic tar header
	I0804 00:35:02.151004   21140 main.go:141] libmachine: (ha-230158-m03) DBG | Writing SSH key tar header
	I0804 00:35:02.151017   21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:02.150934   22024 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03 ...
	I0804 00:35:02.151033   21140 main.go:141] libmachine: (ha-230158-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03
	I0804 00:35:02.151053   21140 main.go:141] libmachine: (ha-230158-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947/.minikube/machines
	I0804 00:35:02.151069   21140 main.go:141] libmachine: (ha-230158-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03 (perms=drwx------)
	I0804 00:35:02.151077   21140 main.go:141] libmachine: (ha-230158-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947/.minikube/machines (perms=drwxr-xr-x)
	I0804 00:35:02.151085   21140 main.go:141] libmachine: (ha-230158-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947/.minikube (perms=drwxr-xr-x)
	I0804 00:35:02.151097   21140 main.go:141] libmachine: (ha-230158-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947 (perms=drwxrwxr-x)
	I0804 00:35:02.151109   21140 main.go:141] libmachine: (ha-230158-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947/.minikube
	I0804 00:35:02.151121   21140 main.go:141] libmachine: (ha-230158-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0804 00:35:02.151136   21140 main.go:141] libmachine: (ha-230158-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0804 00:35:02.151147   21140 main.go:141] libmachine: (ha-230158-m03) Creating domain...
	I0804 00:35:02.151185   21140 main.go:141] libmachine: (ha-230158-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947
	I0804 00:35:02.151213   21140 main.go:141] libmachine: (ha-230158-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0804 00:35:02.151224   21140 main.go:141] libmachine: (ha-230158-m03) DBG | Checking permissions on dir: /home/jenkins
	I0804 00:35:02.151238   21140 main.go:141] libmachine: (ha-230158-m03) DBG | Checking permissions on dir: /home
	I0804 00:35:02.151278   21140 main.go:141] libmachine: (ha-230158-m03) DBG | Skipping /home - not owner
	I0804 00:35:02.152048   21140 main.go:141] libmachine: (ha-230158-m03) define libvirt domain using xml: 
	I0804 00:35:02.152072   21140 main.go:141] libmachine: (ha-230158-m03) <domain type='kvm'>
	I0804 00:35:02.152080   21140 main.go:141] libmachine: (ha-230158-m03)   <name>ha-230158-m03</name>
	I0804 00:35:02.152085   21140 main.go:141] libmachine: (ha-230158-m03)   <memory unit='MiB'>2200</memory>
	I0804 00:35:02.152090   21140 main.go:141] libmachine: (ha-230158-m03)   <vcpu>2</vcpu>
	I0804 00:35:02.152101   21140 main.go:141] libmachine: (ha-230158-m03)   <features>
	I0804 00:35:02.152120   21140 main.go:141] libmachine: (ha-230158-m03)     <acpi/>
	I0804 00:35:02.152127   21140 main.go:141] libmachine: (ha-230158-m03)     <apic/>
	I0804 00:35:02.152135   21140 main.go:141] libmachine: (ha-230158-m03)     <pae/>
	I0804 00:35:02.152153   21140 main.go:141] libmachine: (ha-230158-m03)     
	I0804 00:35:02.152164   21140 main.go:141] libmachine: (ha-230158-m03)   </features>
	I0804 00:35:02.152172   21140 main.go:141] libmachine: (ha-230158-m03)   <cpu mode='host-passthrough'>
	I0804 00:35:02.152181   21140 main.go:141] libmachine: (ha-230158-m03)   
	I0804 00:35:02.152186   21140 main.go:141] libmachine: (ha-230158-m03)   </cpu>
	I0804 00:35:02.152212   21140 main.go:141] libmachine: (ha-230158-m03)   <os>
	I0804 00:35:02.152219   21140 main.go:141] libmachine: (ha-230158-m03)     <type>hvm</type>
	I0804 00:35:02.152228   21140 main.go:141] libmachine: (ha-230158-m03)     <boot dev='cdrom'/>
	I0804 00:35:02.152238   21140 main.go:141] libmachine: (ha-230158-m03)     <boot dev='hd'/>
	I0804 00:35:02.152248   21140 main.go:141] libmachine: (ha-230158-m03)     <bootmenu enable='no'/>
	I0804 00:35:02.152262   21140 main.go:141] libmachine: (ha-230158-m03)   </os>
	I0804 00:35:02.152273   21140 main.go:141] libmachine: (ha-230158-m03)   <devices>
	I0804 00:35:02.152284   21140 main.go:141] libmachine: (ha-230158-m03)     <disk type='file' device='cdrom'>
	I0804 00:35:02.152296   21140 main.go:141] libmachine: (ha-230158-m03)       <source file='/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/boot2docker.iso'/>
	I0804 00:35:02.152306   21140 main.go:141] libmachine: (ha-230158-m03)       <target dev='hdc' bus='scsi'/>
	I0804 00:35:02.152315   21140 main.go:141] libmachine: (ha-230158-m03)       <readonly/>
	I0804 00:35:02.152329   21140 main.go:141] libmachine: (ha-230158-m03)     </disk>
	I0804 00:35:02.152340   21140 main.go:141] libmachine: (ha-230158-m03)     <disk type='file' device='disk'>
	I0804 00:35:02.152352   21140 main.go:141] libmachine: (ha-230158-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0804 00:35:02.152365   21140 main.go:141] libmachine: (ha-230158-m03)       <source file='/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/ha-230158-m03.rawdisk'/>
	I0804 00:35:02.152374   21140 main.go:141] libmachine: (ha-230158-m03)       <target dev='hda' bus='virtio'/>
	I0804 00:35:02.152379   21140 main.go:141] libmachine: (ha-230158-m03)     </disk>
	I0804 00:35:02.152388   21140 main.go:141] libmachine: (ha-230158-m03)     <interface type='network'>
	I0804 00:35:02.152415   21140 main.go:141] libmachine: (ha-230158-m03)       <source network='mk-ha-230158'/>
	I0804 00:35:02.152442   21140 main.go:141] libmachine: (ha-230158-m03)       <model type='virtio'/>
	I0804 00:35:02.152464   21140 main.go:141] libmachine: (ha-230158-m03)     </interface>
	I0804 00:35:02.152481   21140 main.go:141] libmachine: (ha-230158-m03)     <interface type='network'>
	I0804 00:35:02.152495   21140 main.go:141] libmachine: (ha-230158-m03)       <source network='default'/>
	I0804 00:35:02.152503   21140 main.go:141] libmachine: (ha-230158-m03)       <model type='virtio'/>
	I0804 00:35:02.152511   21140 main.go:141] libmachine: (ha-230158-m03)     </interface>
	I0804 00:35:02.152517   21140 main.go:141] libmachine: (ha-230158-m03)     <serial type='pty'>
	I0804 00:35:02.152524   21140 main.go:141] libmachine: (ha-230158-m03)       <target port='0'/>
	I0804 00:35:02.152531   21140 main.go:141] libmachine: (ha-230158-m03)     </serial>
	I0804 00:35:02.152540   21140 main.go:141] libmachine: (ha-230158-m03)     <console type='pty'>
	I0804 00:35:02.152555   21140 main.go:141] libmachine: (ha-230158-m03)       <target type='serial' port='0'/>
	I0804 00:35:02.152566   21140 main.go:141] libmachine: (ha-230158-m03)     </console>
	I0804 00:35:02.152575   21140 main.go:141] libmachine: (ha-230158-m03)     <rng model='virtio'>
	I0804 00:35:02.152585   21140 main.go:141] libmachine: (ha-230158-m03)       <backend model='random'>/dev/random</backend>
	I0804 00:35:02.152594   21140 main.go:141] libmachine: (ha-230158-m03)     </rng>
	I0804 00:35:02.152602   21140 main.go:141] libmachine: (ha-230158-m03)     
	I0804 00:35:02.152608   21140 main.go:141] libmachine: (ha-230158-m03)     
	I0804 00:35:02.152615   21140 main.go:141] libmachine: (ha-230158-m03)   </devices>
	I0804 00:35:02.152627   21140 main.go:141] libmachine: (ha-230158-m03) </domain>
	I0804 00:35:02.152641   21140 main.go:141] libmachine: (ha-230158-m03) 
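
At the API level, "define libvirt domain using xml" followed by "Creating domain..." corresponds to defining and then starting a libvirt domain. A sketch using the libvirt.org/go/libvirt binding (the binding choice is an assumption for illustration; domainXML stands in for the <domain> document printed above):

```go
package main

import (
	"fmt"

	"libvirt.org/go/libvirt"
)

func createVM(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system") // KVMQemuURI from the config
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // persist the definition
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // boot it; "Creating domain..." in the log
}

func main() {
	// Placeholder XML: a real call would pass the full <domain> document.
	fmt.Println(createVM("<domain type='kvm'>...</domain>"))
}
```
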
	I0804 00:35:02.159019   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:5c:f5:c5 in network default
	I0804 00:35:02.159725   21140 main.go:141] libmachine: (ha-230158-m03) Ensuring networks are active...
	I0804 00:35:02.159747   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:02.160530   21140 main.go:141] libmachine: (ha-230158-m03) Ensuring network default is active
	I0804 00:35:02.160945   21140 main.go:141] libmachine: (ha-230158-m03) Ensuring network mk-ha-230158 is active
	I0804 00:35:02.161357   21140 main.go:141] libmachine: (ha-230158-m03) Getting domain xml...
	I0804 00:35:02.162288   21140 main.go:141] libmachine: (ha-230158-m03) Creating domain...
	I0804 00:35:03.416375   21140 main.go:141] libmachine: (ha-230158-m03) Waiting to get IP...
	I0804 00:35:03.417184   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:03.417578   21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
	I0804 00:35:03.417618   21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:03.417560   22024 retry.go:31] will retry after 274.137672ms: waiting for machine to come up
	I0804 00:35:03.693121   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:03.693660   21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
	I0804 00:35:03.693689   21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:03.693595   22024 retry.go:31] will retry after 356.003158ms: waiting for machine to come up
	I0804 00:35:04.051100   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:04.051561   21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
	I0804 00:35:04.051600   21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:04.051508   22024 retry.go:31] will retry after 385.228924ms: waiting for machine to come up
	I0804 00:35:04.437907   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:04.438266   21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
	I0804 00:35:04.438294   21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:04.438213   22024 retry.go:31] will retry after 587.872097ms: waiting for machine to come up
	I0804 00:35:05.027968   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:05.028431   21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
	I0804 00:35:05.028462   21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:05.028378   22024 retry.go:31] will retry after 473.396768ms: waiting for machine to come up
	I0804 00:35:05.502979   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:05.503346   21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
	I0804 00:35:05.503377   21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:05.503286   22024 retry.go:31] will retry after 888.791841ms: waiting for machine to come up
	I0804 00:35:06.393433   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:06.393846   21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
	I0804 00:35:06.393879   21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:06.393833   22024 retry.go:31] will retry after 800.330787ms: waiting for machine to come up
	I0804 00:35:07.196097   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:07.196617   21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
	I0804 00:35:07.196645   21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:07.196581   22024 retry.go:31] will retry after 1.350308245s: waiting for machine to come up
	I0804 00:35:08.549064   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:08.549491   21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
	I0804 00:35:08.549517   21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:08.549449   22024 retry.go:31] will retry after 1.414061347s: waiting for machine to come up
	I0804 00:35:09.964954   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:09.965386   21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
	I0804 00:35:09.965415   21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:09.965338   22024 retry.go:31] will retry after 2.016417552s: waiting for machine to come up
	I0804 00:35:11.983856   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:11.984325   21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
	I0804 00:35:11.984359   21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:11.984293   22024 retry.go:31] will retry after 2.735425811s: waiting for machine to come up
	I0804 00:35:14.722954   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:14.723405   21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
	I0804 00:35:14.723426   21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:14.723375   22024 retry.go:31] will retry after 3.588857245s: waiting for machine to come up
	I0804 00:35:18.314440   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:18.314835   21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
	I0804 00:35:18.314861   21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:18.314796   22024 retry.go:31] will retry after 3.432629659s: waiting for machine to come up
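
The retry.go lines above show a jittered, growing delay between DHCP-lease lookups (274ms, 356ms, 385ms, ... up to several seconds). A sketch of that pattern; lookupIP is a hypothetical stand-in for the libvirt lease query, and the 1.5x growth factor is an assumption that roughly matches the logged delays:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP with jittered, growing delays until it
// succeeds or the deadline passes.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay))) // +0–100% jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // grow ~1.5x per attempt
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 5 {
			return "", errors.New("no lease yet") // simulate a pending DHCP lease
		}
		return "192.168.39.35", nil
	}, time.Minute)
	fmt.Println(ip, err)
}
```
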
	I0804 00:35:21.748758   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:21.749225   21140 main.go:141] libmachine: (ha-230158-m03) Found IP for machine: 192.168.39.35
	I0804 00:35:21.749253   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has current primary IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:21.749262   21140 main.go:141] libmachine: (ha-230158-m03) Reserving static IP address...
	I0804 00:35:21.749675   21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find host DHCP lease matching {name: "ha-230158-m03", mac: "52:54:00:df:27:1f", ip: "192.168.39.35"} in network mk-ha-230158
	I0804 00:35:21.820226   21140 main.go:141] libmachine: (ha-230158-m03) DBG | Getting to WaitForSSH function...
	I0804 00:35:21.820257   21140 main.go:141] libmachine: (ha-230158-m03) Reserved static IP address: 192.168.39.35
	I0804 00:35:21.820271   21140 main.go:141] libmachine: (ha-230158-m03) Waiting for SSH to be available...
	I0804 00:35:21.822782   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:21.823219   21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:minikube Clientid:01:52:54:00:df:27:1f}
	I0804 00:35:21.823319   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:21.823340   21140 main.go:141] libmachine: (ha-230158-m03) DBG | Using SSH client type: external
	I0804 00:35:21.823356   21140 main.go:141] libmachine: (ha-230158-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa (-rw-------)
	I0804 00:35:21.823387   21140 main.go:141] libmachine: (ha-230158-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.35 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:35:21.823405   21140 main.go:141] libmachine: (ha-230158-m03) DBG | About to run SSH command:
	I0804 00:35:21.823420   21140 main.go:141] libmachine: (ha-230158-m03) DBG | exit 0
	I0804 00:35:21.942022   21140 main.go:141] libmachine: (ha-230158-m03) DBG | SSH cmd err, output: <nil>: 
	I0804 00:35:21.942330   21140 main.go:141] libmachine: (ha-230158-m03) KVM machine creation complete!
	I0804 00:35:21.942785   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetConfigRaw
	I0804 00:35:21.943409   21140 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
	I0804 00:35:21.943631   21140 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
	I0804 00:35:21.943818   21140 main.go:141] libmachine: Waiting for machine to be running; this may take a few minutes...
	I0804 00:35:21.943835   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetState
	I0804 00:35:21.945112   21140 main.go:141] libmachine: Detecting operating system of created instance...
	I0804 00:35:21.945135   21140 main.go:141] libmachine: Waiting for SSH to be available...
	I0804 00:35:21.945141   21140 main.go:141] libmachine: Getting to WaitForSSH function...
	I0804 00:35:21.945147   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	I0804 00:35:21.947237   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:21.947573   21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:35:21.947603   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:21.947719   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
	I0804 00:35:21.947889   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:35:21.948050   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:35:21.948187   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
	I0804 00:35:21.948350   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:35:21.948535   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0804 00:35:21.948547   21140 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0804 00:35:22.045614   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:35:22.045637   21140 main.go:141] libmachine: Detecting the provisioner...
	I0804 00:35:22.045645   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	I0804 00:35:22.048807   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:22.049223   21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:35:22.049252   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:22.049375   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
	I0804 00:35:22.049569   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:35:22.049792   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:35:22.049921   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
	I0804 00:35:22.050099   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:35:22.050313   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0804 00:35:22.050326   21140 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0804 00:35:22.147137   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0804 00:35:22.147202   21140 main.go:141] libmachine: found compatible host: buildroot
	I0804 00:35:22.147208   21140 main.go:141] libmachine: Provisioning with buildroot...
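
Provisioner detection is nothing more than running "cat /etc/os-release" and matching the ID field; the ID=buildroot line above is what selects the Buildroot provisioning path. A sketch of that parse (hypothetical helper, assuming the os-release format shown):

    package provision

    import (
        "bufio"
        "strings"
    )

    // detectOSID extracts the ID= field from /etc/os-release output; an ID of
    // "buildroot" selects the Buildroot provisioner, as in the log above.
    func detectOSID(osRelease string) string {
        sc := bufio.NewScanner(strings.NewReader(osRelease))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if strings.HasPrefix(line, "ID=") {
                return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
            }
        }
        return ""
    }
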
	I0804 00:35:22.147216   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetMachineName
	I0804 00:35:22.147474   21140 buildroot.go:166] provisioning hostname "ha-230158-m03"
	I0804 00:35:22.147499   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetMachineName
	I0804 00:35:22.147694   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	I0804 00:35:22.150147   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:22.150579   21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:35:22.150601   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:22.150796   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
	I0804 00:35:22.150958   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:35:22.151108   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:35:22.151221   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
	I0804 00:35:22.151378   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:35:22.151550   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0804 00:35:22.151566   21140 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-230158-m03 && echo "ha-230158-m03" | sudo tee /etc/hostname
	I0804 00:35:22.265955   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-230158-m03
	
	I0804 00:35:22.265979   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	I0804 00:35:22.268571   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:22.268960   21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:35:22.268992   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:22.269150   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
	I0804 00:35:22.269317   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:35:22.269474   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:35:22.269644   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
	I0804 00:35:22.269814   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:35:22.269964   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0804 00:35:22.269981   21140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-230158-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-230158-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-230158-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:35:22.375879   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
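
The /etc/hosts script above is deliberately idempotent: it rewrites the file only when no line already ends with the hostname, replacing an existing 127.0.1.1 entry if one is present and appending one otherwise. Such a command string can be assembled before being shipped over SSH roughly like this (hypothetical helper following the pattern in the log):

    package provision

    import "fmt"

    // hostsEditScript returns a shell snippet mapping 127.0.1.1 to hostname
    // exactly once, matching the grep/sed/tee logic shown above. Sketch only.
    func hostsEditScript(hostname string) string {
        return fmt.Sprintf(`
            if ! grep -xq '.*\s%[1]s' /etc/hosts; then
                if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
                else
                    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
                fi
            fi`, hostname)
    }
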
	I0804 00:35:22.375906   21140 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-3947/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-3947/.minikube}
	I0804 00:35:22.375920   21140 buildroot.go:174] setting up certificates
	I0804 00:35:22.375931   21140 provision.go:84] configureAuth start
	I0804 00:35:22.375939   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetMachineName
	I0804 00:35:22.376233   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
	I0804 00:35:22.378696   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:22.379050   21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:35:22.379079   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:22.379206   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	I0804 00:35:22.381767   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:22.382211   21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:35:22.382254   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:22.382408   21140 provision.go:143] copyHostCerts
	I0804 00:35:22.382433   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem
	I0804 00:35:22.382462   21140 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem, removing ...
	I0804 00:35:22.382469   21140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem
	I0804 00:35:22.382538   21140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem (1082 bytes)
	I0804 00:35:22.382611   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem
	I0804 00:35:22.382630   21140 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem, removing ...
	I0804 00:35:22.382634   21140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem
	I0804 00:35:22.382656   21140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem (1123 bytes)
	I0804 00:35:22.382696   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem
	I0804 00:35:22.382713   21140 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem, removing ...
	I0804 00:35:22.382720   21140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem
	I0804 00:35:22.382741   21140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem (1679 bytes)
	I0804 00:35:22.382788   21140 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem org=jenkins.ha-230158-m03 san=[127.0.0.1 192.168.39.35 ha-230158-m03 localhost minikube]
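
The server certificate is signed by the shared minikube CA and carries every address a client might dial in its SAN list: loopback, the node IP, the hostname, and the generic names localhost and minikube. A compressed sketch of that kind of issuance with crypto/x509 (hypothetical helper; assumes a CA certificate and key are already loaded, and serial/validity are arbitrary):

    package provision

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a server cert whose SANs cover the IPs and names
    // from the log line above. Sketch only, not minikube's implementation.
    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-230158-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(10, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-230158-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.35")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }
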
	I0804 00:35:22.490503   21140 provision.go:177] copyRemoteCerts
	I0804 00:35:22.490552   21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:35:22.490574   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	I0804 00:35:22.492845   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:22.493117   21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:35:22.493144   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:22.493295   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
	I0804 00:35:22.493500   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:35:22.493649   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
	I0804 00:35:22.493783   21140 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
	I0804 00:35:22.572548   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0804 00:35:22.572629   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:35:22.597372   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0804 00:35:22.597440   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0804 00:35:22.622258   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0804 00:35:22.622321   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 00:35:22.646173   21140 provision.go:87] duration metric: took 270.230572ms to configureAuth
	I0804 00:35:22.646200   21140 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:35:22.646432   21140 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:35:22.646456   21140 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
	I0804 00:35:22.646743   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	I0804 00:35:22.649357   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:22.649778   21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:35:22.649807   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:22.649974   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
	I0804 00:35:22.650150   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:35:22.650343   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:35:22.650467   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
	I0804 00:35:22.650598   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:35:22.650751   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0804 00:35:22.650761   21140 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0804 00:35:22.752393   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0804 00:35:22.752413   21140 buildroot.go:70] root file system type: tmpfs
	I0804 00:35:22.752526   21140 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0804 00:35:22.752547   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	I0804 00:35:22.755378   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:22.755730   21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:35:22.755755   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:22.755890   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
	I0804 00:35:22.756069   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:35:22.756225   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:35:22.756364   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
	I0804 00:35:22.756544   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:35:22.756691   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0804 00:35:22.756751   21140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.132"
	Environment="NO_PROXY=192.168.39.132,192.168.39.188"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0804 00:35:22.870196   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.132
	Environment=NO_PROXY=192.168.39.132,192.168.39.188
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0804 00:35:22.870256   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	I0804 00:35:22.872892   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:22.873134   21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:35:22.873163   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:22.873347   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
	I0804 00:35:22.873561   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:35:22.873716   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:35:22.873866   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
	I0804 00:35:22.874030   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:35:22.874250   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0804 00:35:22.874280   21140 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0804 00:35:24.662565   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
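
The diff-then-move one-liner above is what makes the unit install idempotent: the candidate unit is written to docker.service.new, and only when it differs from the installed file does the script swap it in and daemon-reload/enable/restart. The diff "failure" here is expected, since a freshly created VM has no docker.service yet. The same compare-and-swap expressed natively in Go (sketch; service name and paths are the ones from the log):

    package provision

    import (
        "bytes"
        "os"
        "os/exec"
    )

    // installUnit replaces path with newContent and restarts the service only
    // when the content actually changed, mirroring the diff || mv one-liner.
    func installUnit(path string, newContent []byte) error {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, newContent) {
            return nil // unit unchanged; skip the reload and restart
        }
        if err := os.WriteFile(path, newContent, 0644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", "docker"},
            {"systemctl", "restart", "docker"},
        } {
            if err := exec.Command("sudo", args...).Run(); err != nil {
                return err
            }
        }
        return nil
    }
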
	I0804 00:35:24.662588   21140 main.go:141] libmachine: Checking connection to Docker...
	I0804 00:35:24.662597   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetURL
	I0804 00:35:24.663821   21140 main.go:141] libmachine: (ha-230158-m03) DBG | Using libvirt version 6000000
	I0804 00:35:24.666698   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:24.667250   21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:35:24.667290   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:24.667530   21140 main.go:141] libmachine: Docker is up and running!
	I0804 00:35:24.667549   21140 main.go:141] libmachine: Reticulating splines...
	I0804 00:35:24.667555   21140 client.go:171] duration metric: took 22.937060688s to LocalClient.Create
	I0804 00:35:24.667576   21140 start.go:167] duration metric: took 22.937122865s to libmachine.API.Create "ha-230158"
	I0804 00:35:24.667585   21140 start.go:293] postStartSetup for "ha-230158-m03" (driver="kvm2")
	I0804 00:35:24.667593   21140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:35:24.667611   21140 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
	I0804 00:35:24.667873   21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:35:24.667898   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	I0804 00:35:24.670379   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:24.670827   21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:35:24.670854   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:24.671038   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
	I0804 00:35:24.671209   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:35:24.671380   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
	I0804 00:35:24.671528   21140 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
	I0804 00:35:24.760389   21140 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:35:24.769270   21140 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:35:24.769294   21140 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-3947/.minikube/addons for local assets ...
	I0804 00:35:24.769356   21140 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-3947/.minikube/files for local assets ...
	I0804 00:35:24.769458   21140 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> 111362.pem in /etc/ssl/certs
	I0804 00:35:24.769470   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> /etc/ssl/certs/111362.pem
	I0804 00:35:24.769564   21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:35:24.783201   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem --> /etc/ssl/certs/111362.pem (1708 bytes)
	I0804 00:35:24.806787   21140 start.go:296] duration metric: took 139.191095ms for postStartSetup
	I0804 00:35:24.806880   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetConfigRaw
	I0804 00:35:24.807425   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
	I0804 00:35:24.810032   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:24.810420   21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:35:24.810442   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:24.810659   21140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
	I0804 00:35:24.810959   21140 start.go:128] duration metric: took 23.099032096s to createHost
	I0804 00:35:24.810982   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	I0804 00:35:24.813730   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:24.814183   21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:35:24.814207   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:24.814410   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
	I0804 00:35:24.814594   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:35:24.814795   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:35:24.814975   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
	I0804 00:35:24.815165   21140 main.go:141] libmachine: Using SSH client type: native
	I0804 00:35:24.815390   21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0804 00:35:24.815405   21140 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0804 00:35:24.918593   21140 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722731724.892479961
	
	I0804 00:35:24.918614   21140 fix.go:216] guest clock: 1722731724.892479961
	I0804 00:35:24.918624   21140 fix.go:229] Guest: 2024-08-04 00:35:24.892479961 +0000 UTC Remote: 2024-08-04 00:35:24.810971632 +0000 UTC m=+173.988054035 (delta=81.508329ms)
	I0804 00:35:24.918642   21140 fix.go:200] guest clock delta is within tolerance: 81.508329ms
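
The guest-clock check runs "date +%s.%N" inside the VM, parses the fractional-seconds timestamp, and compares it against the host clock; the 81.5ms delta here is inside tolerance, so no clock correction is needed. A sketch of the comparison (hypothetical helper, assuming a timestamp string like 1722731724.892479961):

    package provision

    import (
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses a "seconds.nanoseconds" timestamp from the guest and
    // reports its absolute drift from the local clock. Sketch only.
    func clockDelta(guest string) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guest), 64)
        if err != nil {
            return 0, err
        }
        delta := time.Since(time.Unix(0, int64(secs*float64(time.Second))))
        if delta < 0 {
            delta = -delta
        }
        return delta, nil
    }
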
	I0804 00:35:24.918647   21140 start.go:83] releasing machines lock for "ha-230158-m03", held for 23.206854929s
	I0804 00:35:24.918663   21140 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
	I0804 00:35:24.918886   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
	I0804 00:35:24.921314   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:24.921811   21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:35:24.921841   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:24.923754   21140 out.go:177] * Found network options:
	I0804 00:35:24.924902   21140 out.go:177]   - NO_PROXY=192.168.39.132,192.168.39.188
	W0804 00:35:24.925923   21140 proxy.go:119] fail to check proxy env: Error ip not in block
	W0804 00:35:24.925944   21140 proxy.go:119] fail to check proxy env: Error ip not in block
	I0804 00:35:24.925955   21140 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
	I0804 00:35:24.926479   21140 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
	I0804 00:35:24.926667   21140 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
	I0804 00:35:24.926757   21140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:35:24.926796   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	W0804 00:35:24.926816   21140 proxy.go:119] fail to check proxy env: Error ip not in block
	W0804 00:35:24.926838   21140 proxy.go:119] fail to check proxy env: Error ip not in block
	I0804 00:35:24.926896   21140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 00:35:24.926913   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	I0804 00:35:24.929582   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:24.929653   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:24.929952   21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:35:24.929977   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:24.930004   21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:35:24.930020   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:24.930116   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
	I0804 00:35:24.930210   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
	I0804 00:35:24.930328   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:35:24.930396   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:35:24.930458   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
	I0804 00:35:24.930539   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
	I0804 00:35:24.930611   21140 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
	I0804 00:35:24.930658   21140 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
	W0804 00:35:25.025703   21140 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:35:25.025786   21140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:35:25.044692   21140 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:35:25.044716   21140 start.go:495] detecting cgroup driver to use...
	I0804 00:35:25.044822   21140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:35:25.064747   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0804 00:35:25.076847   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0804 00:35:25.089218   21140 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0804 00:35:25.089297   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0804 00:35:25.102072   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 00:35:25.114924   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0804 00:35:25.127305   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0804 00:35:25.139446   21140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:35:25.152031   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0804 00:35:25.165204   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0804 00:35:25.177111   21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0804 00:35:25.188400   21140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:35:25.198087   21140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:35:25.208382   21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:35:25.321490   21140 ssh_runner.go:195] Run: sudo systemctl restart containerd
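
Each sed above flips one knob in /etc/containerd/config.toml; the decisive one for the "cgroupfs" driver is forcing SystemdCgroup = false in the runc options. The same rewrite over the file's text in Go (sketch, equivalent to the sed command in the log):

    package provision

    import "regexp"

    var systemdCgroupRe = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

    // forceCgroupfs rewrites config.toml so containerd uses the cgroupfs
    // driver, matching the sed invocation shown above.
    func forceCgroupfs(configTOML string) string {
        return systemdCgroupRe.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
    }
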
	I0804 00:35:25.348001   21140 start.go:495] detecting cgroup driver to use...
	I0804 00:35:25.348071   21140 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0804 00:35:25.365037   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:35:25.379611   21140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:35:25.399009   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:35:25.412403   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 00:35:25.425550   21140 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0804 00:35:25.457165   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0804 00:35:25.472279   21140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:35:25.490577   21140 ssh_runner.go:195] Run: which cri-dockerd
	I0804 00:35:25.494212   21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0804 00:35:25.503793   21140 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0804 00:35:25.520119   21140 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0804 00:35:25.631730   21140 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0804 00:35:25.748311   21140 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0804 00:35:25.748356   21140 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0804 00:35:25.765922   21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:35:25.887983   21140 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0804 00:35:28.267711   21140 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.379695689s)
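
The 130-byte daemon.json copied just above is what tells dockerd itself to use the cgroupfs cgroup driver. The log does not show the file's exact bytes; a minimal daemon.json carrying that setting could be produced like this (illustrative content only, not the verbatim file minikube ships):

    package provision

    import (
        "encoding/json"
        "os"
    )

    // writeDaemonJSON emits a minimal /etc/docker/daemon.json selecting the
    // cgroupfs driver; the real file in the log may carry additional keys.
    func writeDaemonJSON() error {
        cfg := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        data, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            return err
        }
        return os.WriteFile("/etc/docker/daemon.json", data, 0644)
    }
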
	I0804 00:35:28.267783   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0804 00:35:28.280799   21140 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0804 00:35:28.297004   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 00:35:28.309602   21140 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0804 00:35:28.421120   21140 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0804 00:35:28.545022   21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:35:28.673591   21140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0804 00:35:28.691136   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0804 00:35:28.704181   21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:35:28.819323   21140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0804 00:35:28.911154   21140 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0804 00:35:28.911254   21140 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
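
"Will wait 60s for socket path" is implemented as a simple poll: check for /var/run/cri-dockerd.sock until it appears or the deadline passes. A sketch of the same wait using a unix-socket dial instead of stat (hypothetical helper, same 60s budget):

    package provision

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSocket polls a unix socket until it accepts connections, the way
    // the log waits for /var/run/cri-dockerd.sock.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if conn, err := net.Dial("unix", path); err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for socket %s", path)
    }
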
	I0804 00:35:28.916801   21140 start.go:563] Will wait 60s for crictl version
	I0804 00:35:28.916847   21140 ssh_runner.go:195] Run: which crictl
	I0804 00:35:28.920890   21140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:35:28.957669   21140 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0804 00:35:28.957733   21140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 00:35:28.988116   21140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0804 00:35:29.014642   21140 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0804 00:35:29.015880   21140 out.go:177]   - env NO_PROXY=192.168.39.132
	I0804 00:35:29.017062   21140 out.go:177]   - env NO_PROXY=192.168.39.132,192.168.39.188
	I0804 00:35:29.018490   21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
	I0804 00:35:29.021070   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:29.021419   21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:35:29.021442   21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:35:29.021716   21140 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 00:35:29.025837   21140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:35:29.038464   21140 mustload.go:65] Loading cluster: ha-230158
	I0804 00:35:29.038684   21140 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:35:29.038925   21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:35:29.038959   21140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:35:29.053933   21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42867
	I0804 00:35:29.054405   21140 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:35:29.054897   21140 main.go:141] libmachine: Using API Version  1
	I0804 00:35:29.054914   21140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:35:29.055243   21140 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:35:29.055464   21140 main.go:141] libmachine: (ha-230158) Calling .GetState
	I0804 00:35:29.056933   21140 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:35:29.057254   21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:35:29.057302   21140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:35:29.071693   21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37685
	I0804 00:35:29.072063   21140 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:35:29.072507   21140 main.go:141] libmachine: Using API Version  1
	I0804 00:35:29.072528   21140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:35:29.072818   21140 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:35:29.073053   21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:35:29.073265   21140 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158 for IP: 192.168.39.35
	I0804 00:35:29.073276   21140 certs.go:194] generating shared ca certs ...
	I0804 00:35:29.073300   21140 certs.go:226] acquiring lock for ca certs: {Name:mkffa482a260ec35b4e7e61a9f84c11349615c10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:35:29.073423   21140 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.key
	I0804 00:35:29.073476   21140 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.key
	I0804 00:35:29.073489   21140 certs.go:256] generating profile certs ...
	I0804 00:35:29.073578   21140 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.key
	I0804 00:35:29.073611   21140 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.07a968ef
	I0804 00:35:29.073632   21140 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.07a968ef with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.132 192.168.39.188 192.168.39.35 192.168.39.254]
	I0804 00:35:29.192480   21140 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.07a968ef ...
	I0804 00:35:29.192511   21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.07a968ef: {Name:mkace921321134e2c31957acee1a1e7265efc015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:35:29.192690   21140 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.07a968ef ...
	I0804 00:35:29.192705   21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.07a968ef: {Name:mkb8a8c865fe06663f3162fa98a89ba246d74f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:35:29.192818   21140 certs.go:381] copying /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.07a968ef -> /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt
	I0804 00:35:29.192972   21140 certs.go:385] copying /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.07a968ef -> /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key
	I0804 00:35:29.193092   21140 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.key
	I0804 00:35:29.193106   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0804 00:35:29.193120   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0804 00:35:29.193133   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0804 00:35:29.193146   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0804 00:35:29.193159   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0804 00:35:29.193171   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0804 00:35:29.193183   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0804 00:35:29.193194   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0804 00:35:29.193239   21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136.pem (1338 bytes)
	W0804 00:35:29.193267   21140 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136_empty.pem, impossibly tiny 0 bytes
	I0804 00:35:29.193276   21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 00:35:29.193305   21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:35:29.193328   21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:35:29.193348   21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem (1679 bytes)
	I0804 00:35:29.193424   21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem (1708 bytes)
	I0804 00:35:29.193453   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:35:29.193467   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136.pem -> /usr/share/ca-certificates/11136.pem
	I0804 00:35:29.193479   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> /usr/share/ca-certificates/111362.pem
	I0804 00:35:29.193506   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:35:29.196795   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:35:29.197229   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:35:29.197254   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:35:29.197484   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:35:29.197690   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:35:29.197860   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:35:29.198029   21140 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
	I0804 00:35:29.274602   21140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0804 00:35:29.280451   21140 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0804 00:35:29.292613   21140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0804 00:35:29.297292   21140 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0804 00:35:29.308604   21140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0804 00:35:29.315122   21140 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0804 00:35:29.332152   21140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0804 00:35:29.337897   21140 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0804 00:35:29.350183   21140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0804 00:35:29.354439   21140 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0804 00:35:29.366402   21140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0804 00:35:29.370314   21140 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0804 00:35:29.381004   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:35:29.407430   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:35:29.432902   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:35:29.457997   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 00:35:29.482458   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0804 00:35:29.506034   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:35:29.529331   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:35:29.552419   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 00:35:29.576496   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:35:29.600781   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136.pem --> /usr/share/ca-certificates/11136.pem (1338 bytes)
	I0804 00:35:29.623522   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem --> /usr/share/ca-certificates/111362.pem (1708 bytes)
	I0804 00:35:29.646258   21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0804 00:35:29.662320   21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0804 00:35:29.680498   21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0804 00:35:29.697629   21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0804 00:35:29.713187   21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0804 00:35:29.730685   21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0804 00:35:29.747896   21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0804 00:35:29.765188   21140 ssh_runner.go:195] Run: openssl version
	I0804 00:35:29.770872   21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:35:29.781638   21140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:35:29.786535   21140 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 00:21 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:35:29.786575   21140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:35:29.792427   21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:35:29.803532   21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11136.pem && ln -fs /usr/share/ca-certificates/11136.pem /etc/ssl/certs/11136.pem"
	I0804 00:35:29.814595   21140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11136.pem
	I0804 00:35:29.819477   21140 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 00:28 /usr/share/ca-certificates/11136.pem
	I0804 00:35:29.819521   21140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11136.pem
	I0804 00:35:29.825367   21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11136.pem /etc/ssl/certs/51391683.0"
	I0804 00:35:29.836832   21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111362.pem && ln -fs /usr/share/ca-certificates/111362.pem /etc/ssl/certs/111362.pem"
	I0804 00:35:29.847347   21140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111362.pem
	I0804 00:35:29.851772   21140 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 00:28 /usr/share/ca-certificates/111362.pem
	I0804 00:35:29.851818   21140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111362.pem
	I0804 00:35:29.857592   21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111362.pem /etc/ssl/certs/3ec20f2e.0"
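Each test/ln pair above maintains OpenSSL's hashed CApath layout: a certificate under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject hash plus ".0", which is how names like b5213941.0, 51391683.0 and 3ec20f2e.0 arise. The derivation in isolation, a minimal sketch using a cert path from this log:

	# Compute the subject hash OpenSSL looks up, then create the expected symlink.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"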
	I0804 00:35:29.868948   21140 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:35:29.872798   21140 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0804 00:35:29.872860   21140 kubeadm.go:934] updating node {m03 192.168.39.35 8443 v1.30.3 docker true true} ...
	I0804 00:35:29.872962   21140 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-230158-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.35
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-230158 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
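The kubelet unit fragment just printed is what lands in the 10-kubeadm.conf drop-in transferred a moment later (at 00:35:30). Assuming a systemd host, the merged result can be inspected like this (a sketch, not part of the test):

	# Show kubelet.service together with its drop-ins, then apply changes.
	systemctl cat kubelet
	sudo systemctl daemon-reload && sudo systemctl restart kubelet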
	I0804 00:35:29.872997   21140 kube-vip.go:115] generating kube-vip config ...
	I0804 00:35:29.873028   21140 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0804 00:35:29.888413   21140 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0804 00:35:29.888496   21140 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
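With cp_enable, vip_leaderelection and lb_enable all true, the elected kube-vip leader should hold the VIP 192.168.39.254 on eth0 and load-balance port 8443 across the control planes. A spot-check sketch on a node, with interface, VIP and CA path taken from the config above (assumes the default RBAC that exposes /healthz anonymously):

	# On the current kube-vip leader, the VIP appears as a secondary address.
	ip addr show eth0 | grep 192.168.39.254
	# The API server should then answer on the VIP.
	curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.39.254:8443/healthz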
	I0804 00:35:29.888563   21140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:35:29.898226   21140 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0804 00:35:29.898292   21140 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0804 00:35:29.907477   21140 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0804 00:35:29.907501   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0804 00:35:29.907549   21140 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0804 00:35:29.907481   21140 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0804 00:35:29.907583   21140 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0804 00:35:29.907605   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0804 00:35:29.907596   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:35:29.907687   21140 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0804 00:35:29.920969   21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0804 00:35:29.921004   21140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0804 00:35:29.921030   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0804 00:35:29.921052   21140 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0804 00:35:29.921052   21140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0804 00:35:29.921083   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0804 00:35:29.933235   21140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0804 00:35:29.933273   21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
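The three "Not caching binary" lines above pin each download to its published .sha256 file via the checksum query parameter. Done by hand, the same verification looks like this (kubelet shown; kubectl and kubeadm are identical in shape):

	# Download the binary and its checksum, then verify before installing.
	curl -LO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet
	curl -LO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check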
	I0804 00:35:30.833563   21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0804 00:35:30.843265   21140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0804 00:35:30.861374   21140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:35:30.877889   21140 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0804 00:35:30.894388   21140 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0804 00:35:30.898175   21140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
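The /etc/hosts rewrite above uses the build-then-copy idiom: the filtered file is assembled unprivileged under /tmp and only the final cp runs under sudo, because in a plain "sudo cmd > /etc/hosts" the redirection is performed by the unprivileged calling shell. The same idiom spelled out, with the entry taken from the log:

	# Drop the stale entry, append the new one, then install the result as root.
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
	  echo $'192.168.39.254\tcontrol-plane.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts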
	I0804 00:35:30.910151   21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:35:31.026639   21140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:35:31.050034   21140 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:35:31.050392   21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:35:31.050429   21140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:35:31.065656   21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33459
	I0804 00:35:31.066112   21140 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:35:31.066671   21140 main.go:141] libmachine: Using API Version  1
	I0804 00:35:31.066692   21140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:35:31.066998   21140 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:35:31.067201   21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:35:31.067363   21140 start.go:317] joinCluster: &{Name:ha-230158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-230158 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.188 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.35 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:35:31.067533   21140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0804 00:35:31.067557   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:35:31.070882   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:35:31.071389   21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:35:31.071417   21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:35:31.071587   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:35:31.071755   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:35:31.071920   21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:35:31.072074   21140 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
	I0804 00:35:31.263029   21140 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.35 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 00:35:31.263071   21140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mf76a4.t8pat0uzu8mjr998 --discovery-token-ca-cert-hash sha256:df45234da77be7664dbc18ef5748e1cb4d47aa47bd0026b7b7a8eef37767d0f0 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-230158-m03 --control-plane --apiserver-advertise-address=192.168.39.35 --apiserver-bind-port=8443"
	I0804 00:35:55.597900   21140 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mf76a4.t8pat0uzu8mjr998 --discovery-token-ca-cert-hash sha256:df45234da77be7664dbc18ef5748e1cb4d47aa47bd0026b7b7a8eef37767d0f0 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-230158-m03 --control-plane --apiserver-advertise-address=192.168.39.35 --apiserver-bind-port=8443": (24.334796454s)
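That ~24s step is the standard two-phase control-plane join: an existing node mints a token ("kubeadm token create --print-join-command --ttl=0", run at 00:35:31), and the new node replays it with --control-plane plus its own advertise address. Reduced to its shape, with every value a placeholder (the real token and hash appear in the log above):

	# On an existing control-plane node: print a reusable join command.
	kubeadm token create --print-join-command --ttl=0
	# On the joining node: add --control-plane and its own advertise address.
	sudo kubeadm join <endpoint>:8443 --token <token> \
	  --discovery-token-ca-cert-hash sha256:<hash> \
	  --control-plane --apiserver-advertise-address=<node-ip>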
	I0804 00:35:55.597941   21140 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0804 00:35:56.245827   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-230158-m03 minikube.k8s.io/updated_at=2024_08_04T00_35_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6 minikube.k8s.io/name=ha-230158 minikube.k8s.io/primary=false
	I0804 00:35:56.401002   21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-230158-m03 node-role.kubernetes.io/control-plane:NoSchedule-
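Note the trailing "-" on the taint name above: with it, kubectl taint removes the NoSchedule taint (which is what lets minikube's control-plane nodes also run workloads); without it, the same command would add the taint. Side by side:

	# Add the control-plane NoSchedule taint (errors if it already exists):
	kubectl taint nodes ha-230158-m03 node-role.kubernetes.io/control-plane:NoSchedule
	# Remove it again; the trailing dash is the removal operator:
	kubectl taint nodes ha-230158-m03 node-role.kubernetes.io/control-plane:NoSchedule-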
	I0804 00:35:56.514911   21140 start.go:319] duration metric: took 25.447543043s to joinCluster
	I0804 00:35:56.514984   21140 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.35 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0804 00:35:56.515219   21140 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:35:56.516163   21140 out.go:177] * Verifying Kubernetes components...
	I0804 00:35:56.517473   21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:35:56.792729   21140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:35:56.812319   21140 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19364-3947/kubeconfig
	I0804 00:35:56.812567   21140 kapi.go:59] client config for ha-230158: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.crt", KeyFile:"/home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.key", CAFile:"/home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0804 00:35:56.812625   21140 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.132:8443
	I0804 00:35:56.812837   21140 node_ready.go:35] waiting up to 6m0s for node "ha-230158-m03" to be "Ready" ...
	I0804 00:35:56.812921   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:35:56.812931   21140 round_trippers.go:469] Request Headers:
	I0804 00:35:56.812942   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:35:56.812951   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:35:56.822186   21140 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0804 00:35:57.313604   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:35:57.313624   21140 round_trippers.go:469] Request Headers:
	I0804 00:35:57.313634   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:35:57.313642   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:35:57.316816   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:35:57.813650   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:35:57.813681   21140 round_trippers.go:469] Request Headers:
	I0804 00:35:57.813717   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:35:57.813728   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:35:57.817575   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:35:58.313029   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:35:58.313053   21140 round_trippers.go:469] Request Headers:
	I0804 00:35:58.313063   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:35:58.313069   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:35:58.317051   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:35:58.813933   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:35:58.813954   21140 round_trippers.go:469] Request Headers:
	I0804 00:35:58.813962   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:35:58.813966   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:35:58.817153   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:35:58.817626   21140 node_ready.go:53] node "ha-230158-m03" has status "Ready":"False"
	I0804 00:35:59.313955   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:35:59.313984   21140 round_trippers.go:469] Request Headers:
	I0804 00:35:59.313995   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:35:59.314002   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:35:59.320217   21140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0804 00:35:59.813653   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:35:59.813672   21140 round_trippers.go:469] Request Headers:
	I0804 00:35:59.813683   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:35:59.813691   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:35:59.818951   21140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0804 00:36:00.313942   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:00.313967   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:00.313979   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:00.313984   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:00.317522   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:00.813432   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:00.813454   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:00.813464   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:00.813468   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:00.819638   21140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0804 00:36:00.820329   21140 node_ready.go:53] node "ha-230158-m03" has status "Ready":"False"
	I0804 00:36:01.313898   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:01.313921   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:01.313929   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:01.313933   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:01.317348   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:01.813889   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:01.813917   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:01.813929   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:01.813936   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:01.817472   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:02.313397   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:02.313418   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:02.313428   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:02.313433   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:02.317733   21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 00:36:02.813698   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:02.813719   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:02.813736   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:02.813742   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:02.816874   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:03.313553   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:03.313581   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:03.313588   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:03.313592   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:03.316863   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:03.317654   21140 node_ready.go:53] node "ha-230158-m03" has status "Ready":"False"
	I0804 00:36:03.813066   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:03.813088   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:03.813095   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:03.813098   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:03.816776   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:04.313754   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:04.313776   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:04.313784   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:04.313788   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:04.317202   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:04.813607   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:04.813634   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:04.813645   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:04.813656   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:04.817037   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:05.313378   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:05.313400   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:05.313408   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:05.313413   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:05.317269   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:05.317787   21140 node_ready.go:53] node "ha-230158-m03" has status "Ready":"False"
	I0804 00:36:05.813093   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:05.813116   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:05.813124   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:05.813127   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:05.816797   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:06.313024   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:06.313045   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:06.313054   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:06.313060   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:06.316202   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:06.813564   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:06.813585   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:06.813596   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:06.813600   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:06.817030   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:07.313010   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:07.313036   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:07.313046   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:07.313051   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:07.316498   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:07.813775   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:07.813796   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:07.813802   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:07.813809   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:07.817261   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:07.818039   21140 node_ready.go:53] node "ha-230158-m03" has status "Ready":"False"
	I0804 00:36:08.313869   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:08.313890   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:08.313901   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:08.313905   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:08.317961   21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 00:36:08.813891   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:08.813912   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:08.813920   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:08.813925   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:08.817500   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:09.313373   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:09.313395   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:09.313402   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:09.313407   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:09.316594   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:09.813936   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:09.813955   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:09.813962   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:09.813967   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:09.817354   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:10.313416   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:10.313443   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:10.313451   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:10.313455   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:10.316728   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:10.317273   21140 node_ready.go:53] node "ha-230158-m03" has status "Ready":"False"
	I0804 00:36:10.813608   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:10.813628   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:10.813635   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:10.813642   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:10.816719   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:11.313687   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:11.313766   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:11.313787   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:11.313801   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:11.317800   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:11.813156   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:11.813176   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:11.813184   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:11.813187   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:11.816530   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:12.313362   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:12.313384   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:12.313393   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:12.313397   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:12.316689   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:12.317653   21140 node_ready.go:53] node "ha-230158-m03" has status "Ready":"False"
	I0804 00:36:12.813988   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:12.814026   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:12.814037   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:12.814043   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:12.817685   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:13.313096   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:13.313121   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:13.313132   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:13.313139   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:13.316170   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:13.813487   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:13.813507   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:13.813519   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:13.813522   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:13.817165   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:14.313242   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:14.313271   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:14.313283   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:14.313291   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:14.316695   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:14.317572   21140 node_ready.go:49] node "ha-230158-m03" has status "Ready":"True"
	I0804 00:36:14.317594   21140 node_ready.go:38] duration metric: took 17.504738279s for node "ha-230158-m03" to be "Ready" ...
	I0804 00:36:14.317604   21140 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
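The node and pod readiness checks around this point are minikube's own API polling through its round-tripper. Roughly the same two gates, expressed with kubectl against the kubeconfig from this run (a sketch, not the test's actual code):

	# Wait for the node, then for the system-critical pods, mirroring the log's checks.
	kubectl --kubeconfig /var/lib/minikube/kubeconfig wait --for=condition=Ready \
	  node/ha-230158-m03 --timeout=6m
	kubectl --kubeconfig /var/lib/minikube/kubeconfig wait --for=condition=Ready \
	  pods --all -n kube-system --timeout=6m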
	I0804 00:36:14.317669   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0804 00:36:14.317682   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:14.317689   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:14.317693   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:14.324294   21140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0804 00:36:14.330984   21140 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cqbjc" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:14.331072   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cqbjc
	I0804 00:36:14.331083   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:14.331089   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:14.331094   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:14.334136   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:14.335282   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
	I0804 00:36:14.335301   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:14.335312   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:14.335319   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:14.338942   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:14.339726   21140 pod_ready.go:92] pod "coredns-7db6d8ff4d-cqbjc" in "kube-system" namespace has status "Ready":"True"
	I0804 00:36:14.339743   21140 pod_ready.go:81] duration metric: took 8.732646ms for pod "coredns-7db6d8ff4d-cqbjc" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:14.339752   21140 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xt2gb" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:14.339795   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xt2gb
	I0804 00:36:14.339803   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:14.339809   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:14.339813   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:14.342514   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:36:14.343458   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
	I0804 00:36:14.343472   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:14.343479   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:14.343483   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:14.345977   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:36:14.346711   21140 pod_ready.go:92] pod "coredns-7db6d8ff4d-xt2gb" in "kube-system" namespace has status "Ready":"True"
	I0804 00:36:14.346729   21140 pod_ready.go:81] duration metric: took 6.970575ms for pod "coredns-7db6d8ff4d-xt2gb" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:14.346738   21140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-230158" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:14.346793   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-230158
	I0804 00:36:14.346803   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:14.346814   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:14.346822   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:14.349116   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:36:14.349889   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
	I0804 00:36:14.349903   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:14.349912   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:14.349918   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:14.352287   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:36:14.352752   21140 pod_ready.go:92] pod "etcd-ha-230158" in "kube-system" namespace has status "Ready":"True"
	I0804 00:36:14.352768   21140 pod_ready.go:81] duration metric: took 6.022837ms for pod "etcd-ha-230158" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:14.352776   21140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:14.352823   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-230158-m02
	I0804 00:36:14.352833   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:14.352840   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:14.352845   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:14.355251   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:36:14.356139   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:36:14.356154   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:14.356162   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:14.356168   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:14.358804   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:36:14.359221   21140 pod_ready.go:92] pod "etcd-ha-230158-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 00:36:14.359236   21140 pod_ready.go:81] duration metric: took 6.450652ms for pod "etcd-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:14.359246   21140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-230158-m03" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:14.513671   21140 request.go:629] Waited for 154.368864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-230158-m03
	I0804 00:36:14.513765   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-230158-m03
	I0804 00:36:14.513774   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:14.513794   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:14.513811   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:14.517308   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:14.713219   21140 request.go:629] Waited for 195.282606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:14.713271   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:14.713278   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:14.713287   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:14.713292   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:14.717115   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:14.717626   21140 pod_ready.go:92] pod "etcd-ha-230158-m03" in "kube-system" namespace has status "Ready":"True"
	I0804 00:36:14.717649   21140 pod_ready.go:81] duration metric: took 358.394373ms for pod "etcd-ha-230158-m03" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:14.717671   21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-230158" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:14.913544   21140 request.go:629] Waited for 195.774852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-230158
	I0804 00:36:14.913606   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-230158
	I0804 00:36:14.913611   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:14.913620   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:14.913627   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:14.916592   21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 00:36:15.113797   21140 request.go:629] Waited for 196.366235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158
	I0804 00:36:15.113873   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
	I0804 00:36:15.113880   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:15.113890   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:15.113897   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:15.117750   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:15.118590   21140 pod_ready.go:92] pod "kube-apiserver-ha-230158" in "kube-system" namespace has status "Ready":"True"
	I0804 00:36:15.118608   21140 pod_ready.go:81] duration metric: took 400.926261ms for pod "kube-apiserver-ha-230158" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:15.118618   21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:15.313717   21140 request.go:629] Waited for 195.03725ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-230158-m02
	I0804 00:36:15.313792   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-230158-m02
	I0804 00:36:15.313797   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:15.313805   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:15.313808   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:15.317077   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:15.513361   21140 request.go:629] Waited for 195.280512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:36:15.513441   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:36:15.513458   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:15.513471   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:15.513485   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:15.517319   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:15.517959   21140 pod_ready.go:92] pod "kube-apiserver-ha-230158-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 00:36:15.517975   21140 pod_ready.go:81] duration metric: took 399.350403ms for pod "kube-apiserver-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:15.517987   21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-230158-m03" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:15.714166   21140 request.go:629] Waited for 196.119755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-230158-m03
	I0804 00:36:15.714246   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-230158-m03
	I0804 00:36:15.714254   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:15.714269   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:15.714277   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:15.717553   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:15.913485   21140 request.go:629] Waited for 195.02327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:15.913563   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:15.913572   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:15.913584   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:15.913595   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:15.916620   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:15.917187   21140 pod_ready.go:92] pod "kube-apiserver-ha-230158-m03" in "kube-system" namespace has status "Ready":"True"
	I0804 00:36:15.917207   21140 pod_ready.go:81] duration metric: took 399.213201ms for pod "kube-apiserver-ha-230158-m03" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:15.917217   21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-230158" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:16.113928   21140 request.go:629] Waited for 196.6406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-230158
	I0804 00:36:16.114040   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-230158
	I0804 00:36:16.114055   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:16.114064   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:16.114074   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:16.118199   21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 00:36:16.314136   21140 request.go:629] Waited for 193.357767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158
	I0804 00:36:16.314194   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
	I0804 00:36:16.314199   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:16.314207   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:16.314211   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:16.317233   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:16.318043   21140 pod_ready.go:92] pod "kube-controller-manager-ha-230158" in "kube-system" namespace has status "Ready":"True"
	I0804 00:36:16.318063   21140 pod_ready.go:81] duration metric: took 400.838103ms for pod "kube-controller-manager-ha-230158" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:16.318077   21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:16.514190   21140 request.go:629] Waited for 196.049158ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-230158-m02
	I0804 00:36:16.514284   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-230158-m02
	I0804 00:36:16.514291   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:16.514299   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:16.514307   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:16.517440   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:16.713372   21140 request.go:629] Waited for 195.27709ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:36:16.713422   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:36:16.713428   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:16.713459   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:16.713467   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:16.717134   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:16.717887   21140 pod_ready.go:92] pod "kube-controller-manager-ha-230158-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 00:36:16.717903   21140 pod_ready.go:81] duration metric: took 399.816963ms for pod "kube-controller-manager-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:16.717913   21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-230158-m03" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:16.913366   21140 request.go:629] Waited for 195.375288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-230158-m03
	I0804 00:36:16.913421   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-230158-m03
	I0804 00:36:16.913427   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:16.913434   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:16.913452   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:16.917008   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:17.114002   21140 request.go:629] Waited for 196.360087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:17.114062   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:17.114083   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:17.114094   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:17.114099   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:17.118060   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:17.118868   21140 pod_ready.go:92] pod "kube-controller-manager-ha-230158-m03" in "kube-system" namespace has status "Ready":"True"
	I0804 00:36:17.118889   21140 pod_ready.go:81] duration metric: took 400.967735ms for pod "kube-controller-manager-ha-230158-m03" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:17.118898   21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8tgp2" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:17.313818   21140 request.go:629] Waited for 194.852885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8tgp2
	I0804 00:36:17.313892   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8tgp2
	I0804 00:36:17.313903   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:17.313914   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:17.313926   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:17.317347   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:17.513381   21140 request.go:629] Waited for 195.279495ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:36:17.513450   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:36:17.513455   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:17.513463   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:17.513466   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:17.517059   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:17.517804   21140 pod_ready.go:92] pod "kube-proxy-8tgp2" in "kube-system" namespace has status "Ready":"True"
	I0804 00:36:17.517823   21140 pod_ready.go:81] duration metric: took 398.918885ms for pod "kube-proxy-8tgp2" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:17.517832   21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-llxx2" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:17.713990   21140 request.go:629] Waited for 196.084751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-llxx2
	I0804 00:36:17.714051   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-llxx2
	I0804 00:36:17.714058   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:17.714067   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:17.714072   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:17.717314   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:17.913369   21140 request.go:629] Waited for 195.291585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:17.913427   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:17.913432   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:17.913452   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:17.913459   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:17.917761   21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 00:36:17.918331   21140 pod_ready.go:92] pod "kube-proxy-llxx2" in "kube-system" namespace has status "Ready":"True"
	I0804 00:36:17.918350   21140 pod_ready.go:81] duration metric: took 400.511651ms for pod "kube-proxy-llxx2" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:17.918358   21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vdn92" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:18.113862   21140 request.go:629] Waited for 195.443141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdn92
	I0804 00:36:18.113931   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdn92
	I0804 00:36:18.113947   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:18.113959   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:18.113967   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:18.118260   21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 00:36:18.313934   21140 request.go:629] Waited for 194.230466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158
	I0804 00:36:18.313994   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
	I0804 00:36:18.314001   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:18.314017   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:18.314030   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:18.317513   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:18.318372   21140 pod_ready.go:92] pod "kube-proxy-vdn92" in "kube-system" namespace has status "Ready":"True"
	I0804 00:36:18.318391   21140 pod_ready.go:81] duration metric: took 400.027057ms for pod "kube-proxy-vdn92" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:18.318402   21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-230158" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:18.513379   21140 request.go:629] Waited for 194.888882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-230158
	I0804 00:36:18.513443   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-230158
	I0804 00:36:18.513452   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:18.513461   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:18.513470   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:18.516837   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:18.714010   21140 request.go:629] Waited for 196.366502ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158
	I0804 00:36:18.714127   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
	I0804 00:36:18.714142   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:18.714152   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:18.714161   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:18.718093   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:18.718711   21140 pod_ready.go:92] pod "kube-scheduler-ha-230158" in "kube-system" namespace has status "Ready":"True"
	I0804 00:36:18.718732   21140 pod_ready.go:81] duration metric: took 400.322513ms for pod "kube-scheduler-ha-230158" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:18.718744   21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:18.913928   21140 request.go:629] Waited for 195.096761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-230158-m02
	I0804 00:36:18.913992   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-230158-m02
	I0804 00:36:18.913998   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:18.914006   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:18.914012   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:18.917481   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:19.114007   21140 request.go:629] Waited for 195.769588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:36:19.114057   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
	I0804 00:36:19.114062   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:19.114070   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:19.114074   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:19.117807   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:19.118355   21140 pod_ready.go:92] pod "kube-scheduler-ha-230158-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 00:36:19.118372   21140 pod_ready.go:81] duration metric: took 399.621886ms for pod "kube-scheduler-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:19.118382   21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-230158-m03" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:19.313596   21140 request.go:629] Waited for 195.149418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-230158-m03
	I0804 00:36:19.313674   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-230158-m03
	I0804 00:36:19.313680   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:19.313687   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:19.313691   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:19.317150   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:19.514026   21140 request.go:629] Waited for 196.255241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:19.514116   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
	I0804 00:36:19.514126   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:19.514134   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:19.514137   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:19.517549   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:19.518084   21140 pod_ready.go:92] pod "kube-scheduler-ha-230158-m03" in "kube-system" namespace has status "Ready":"True"
	I0804 00:36:19.518102   21140 pod_ready.go:81] duration metric: took 399.712625ms for pod "kube-scheduler-ha-230158-m03" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:19.518112   21140 pod_ready.go:38] duration metric: took 5.20049857s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
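
Editor's note: the long alternation of pod and node GETs above is minikube's extra-wait loop. For each system-critical pod it fetches the pod, checks the PodReady condition, fetches the owning node, and repeats, with client-go's default rate limiter producing the ~195 ms "client-side throttling" waits in the log. Below is a minimal sketch of that readiness check using client-go; the kubeconfig path and pod name are placeholders, and this is an illustration, not minikube's actual pod_ready.go code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady mirrors the check behind the pod_ready.go lines above:
// a pod counts as Ready when its PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder path; minikube builds its client from the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute) // the 6m0s budget seen in the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").
			Get(context.TODO(), "kube-apiserver-ha-230158-m02", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		// client-go's default QPS limit is what yields the
		// "Waited ... due to client-side throttling" lines above.
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
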
	I0804 00:36:19.518128   21140 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:36:19.518177   21140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:36:19.535013   21140 api_server.go:72] duration metric: took 23.019993564s to wait for apiserver process to appear ...
	I0804 00:36:19.535039   21140 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:36:19.535059   21140 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8443/healthz ...
	I0804 00:36:19.545694   21140 api_server.go:279] https://192.168.39.132:8443/healthz returned 200:
	ok
	I0804 00:36:19.545771   21140 round_trippers.go:463] GET https://192.168.39.132:8443/version
	I0804 00:36:19.545782   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:19.545792   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:19.545799   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:19.546739   21140 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0804 00:36:19.546810   21140 api_server.go:141] control plane version: v1.30.3
	I0804 00:36:19.546827   21140 api_server.go:131] duration metric: took 11.780862ms to wait for apiserver health ...
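
Editor's note: the healthz and version probes above are plain HTTPS GETs against the apiserver at the address from the log. A rough stand-alone equivalent follows; TLS verification is disabled only for brevity here (the real client authenticates with the profile's certificates), and it relies on /healthz and /version being readable anonymously, which holds under the default RBAC public-info-viewer binding.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Skip certificate verification only in this sketch.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://192.168.39.132:8443" + path)
		if err != nil {
			fmt.Println(path, "error:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("GET %s -> %d: %s\n", path, resp.StatusCode, body)
	}
}
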
	I0804 00:36:19.546837   21140 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:36:19.714166   21140 request.go:629] Waited for 167.261084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0804 00:36:19.714216   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0804 00:36:19.714221   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:19.714242   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:19.714247   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:19.720934   21140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0804 00:36:19.729908   21140 system_pods.go:59] 24 kube-system pods found
	I0804 00:36:19.729934   21140 system_pods.go:61] "coredns-7db6d8ff4d-cqbjc" [d99b5cde-3b5b-4c29-82c4-ec9fa36b4479] Running
	I0804 00:36:19.729939   21140 system_pods.go:61] "coredns-7db6d8ff4d-xt2gb" [2bd541a1-7bf0-4709-b600-365d5527b936] Running
	I0804 00:36:19.729943   21140 system_pods.go:61] "etcd-ha-230158" [dc6a8dde-229d-4857-8f08-dcc8399b1420] Running
	I0804 00:36:19.729947   21140 system_pods.go:61] "etcd-ha-230158-m02" [ed2085f3-8b06-4e15-8ed3-bd434d9aaebb] Running
	I0804 00:36:19.729950   21140 system_pods.go:61] "etcd-ha-230158-m03" [46db3fc8-2779-48a0-94dc-547182e460aa] Running
	I0804 00:36:19.729953   21140 system_pods.go:61] "kindnet-n5cql" [56108054-acd3-48ae-b929-75bd31cbd1ad] Running
	I0804 00:36:19.729956   21140 system_pods.go:61] "kindnet-w86v4" [1435af28-2e6c-4fa4-8315-00d18be70d00] Running
	I0804 00:36:19.729959   21140 system_pods.go:61] "kindnet-wfd5t" [b7ccd328-13aa-4161-8a20-5df8d153592f] Running
	I0804 00:36:19.729963   21140 system_pods.go:61] "kube-apiserver-ha-230158" [8c1d6b4d-e30e-4b30-84ff-f53490a7d9ec] Running
	I0804 00:36:19.729966   21140 system_pods.go:61] "kube-apiserver-ha-230158-m02" [8d384508-62d2-450a-a512-622aac96913a] Running
	I0804 00:36:19.729969   21140 system_pods.go:61] "kube-apiserver-ha-230158-m03" [3a2f9422-7354-47e1-87cc-988fd0e44316] Running
	I0804 00:36:19.729972   21140 system_pods.go:61] "kube-controller-manager-ha-230158" [cf39dcfb-ca37-45e7-9306-456ea22b484c] Running
	I0804 00:36:19.729975   21140 system_pods.go:61] "kube-controller-manager-ha-230158-m02" [c751903c-cb15-4718-87d7-f9ccf79d5869] Running
	I0804 00:36:19.729979   21140 system_pods.go:61] "kube-controller-manager-ha-230158-m03" [c49084bd-2f5d-495b-ba60-9861b0681e5e] Running
	I0804 00:36:19.729982   21140 system_pods.go:61] "kube-proxy-8tgp2" [17ce55b9-8d25-4b4a-9b12-ff2cb84c22fa] Running
	I0804 00:36:19.729988   21140 system_pods.go:61] "kube-proxy-llxx2" [b9fbc18d-404d-4733-a31b-d95ab7e04dfd] Running
	I0804 00:36:19.729990   21140 system_pods.go:61] "kube-proxy-vdn92" [02c77eda-8f0e-49d4-ae42-bbf18d0eeaf5] Running
	I0804 00:36:19.729993   21140 system_pods.go:61] "kube-scheduler-ha-230158" [c24d7658-a418-4a21-8e93-e31af5d65e05] Running
	I0804 00:36:19.729997   21140 system_pods.go:61] "kube-scheduler-ha-230158-m02" [97d10375-f0ca-4e13-bc7b-8d775aea4678] Running
	I0804 00:36:19.730000   21140 system_pods.go:61] "kube-scheduler-ha-230158-m03" [d5f8d184-aa92-4e8b-912d-788ccb98fe32] Running
	I0804 00:36:19.730003   21140 system_pods.go:61] "kube-vip-ha-230158" [f784b7b5-0db7-49f2-bcac-3a0dbeee74dd] Running
	I0804 00:36:19.730006   21140 system_pods.go:61] "kube-vip-ha-230158-m02" [0c04a6aa-7d79-4318-9cd7-b936d3358e19] Running
	I0804 00:36:19.730009   21140 system_pods.go:61] "kube-vip-ha-230158-m03" [d8bb79c6-6ae4-47e2-ad7b-e731f070228c] Running
	I0804 00:36:19.730012   21140 system_pods.go:61] "storage-provisioner" [653e0c50-af0a-4708-aaa9-b0d63616df94] Running
	I0804 00:36:19.730020   21140 system_pods.go:74] duration metric: took 183.175097ms to wait for pod list to return data ...
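
Editor's note: the 24-pod inventory above comes from one List call against the kube-system namespace, with each entry checked for phase Running (and re-checked in the k8s-apps step below). A sketch of the same listing with client-go, again with a placeholder kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	running := 0
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			running++
		}
		fmt.Printf("%-45s %s\n", p.Name, p.Status.Phase)
	}
	fmt.Printf("%d/%d kube-system pods Running\n", running, len(pods.Items))
}
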
	I0804 00:36:19.730029   21140 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:36:19.913280   21140 request.go:629] Waited for 183.162867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/default/serviceaccounts
	I0804 00:36:19.913337   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/default/serviceaccounts
	I0804 00:36:19.913348   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:19.913358   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:19.913362   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:19.916500   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:19.916624   21140 default_sa.go:45] found service account: "default"
	I0804 00:36:19.916638   21140 default_sa.go:55] duration metric: took 186.603168ms for default service account to be created ...
	I0804 00:36:19.916645   21140 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:36:20.114154   21140 request.go:629] Waited for 197.446057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0804 00:36:20.114216   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
	I0804 00:36:20.114224   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:20.114258   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:20.114267   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:20.127325   21140 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0804 00:36:20.136207   21140 system_pods.go:86] 24 kube-system pods found
	I0804 00:36:20.136232   21140 system_pods.go:89] "coredns-7db6d8ff4d-cqbjc" [d99b5cde-3b5b-4c29-82c4-ec9fa36b4479] Running
	I0804 00:36:20.136238   21140 system_pods.go:89] "coredns-7db6d8ff4d-xt2gb" [2bd541a1-7bf0-4709-b600-365d5527b936] Running
	I0804 00:36:20.136242   21140 system_pods.go:89] "etcd-ha-230158" [dc6a8dde-229d-4857-8f08-dcc8399b1420] Running
	I0804 00:36:20.136246   21140 system_pods.go:89] "etcd-ha-230158-m02" [ed2085f3-8b06-4e15-8ed3-bd434d9aaebb] Running
	I0804 00:36:20.136250   21140 system_pods.go:89] "etcd-ha-230158-m03" [46db3fc8-2779-48a0-94dc-547182e460aa] Running
	I0804 00:36:20.136254   21140 system_pods.go:89] "kindnet-n5cql" [56108054-acd3-48ae-b929-75bd31cbd1ad] Running
	I0804 00:36:20.136257   21140 system_pods.go:89] "kindnet-w86v4" [1435af28-2e6c-4fa4-8315-00d18be70d00] Running
	I0804 00:36:20.136262   21140 system_pods.go:89] "kindnet-wfd5t" [b7ccd328-13aa-4161-8a20-5df8d153592f] Running
	I0804 00:36:20.136266   21140 system_pods.go:89] "kube-apiserver-ha-230158" [8c1d6b4d-e30e-4b30-84ff-f53490a7d9ec] Running
	I0804 00:36:20.136270   21140 system_pods.go:89] "kube-apiserver-ha-230158-m02" [8d384508-62d2-450a-a512-622aac96913a] Running
	I0804 00:36:20.136274   21140 system_pods.go:89] "kube-apiserver-ha-230158-m03" [3a2f9422-7354-47e1-87cc-988fd0e44316] Running
	I0804 00:36:20.136278   21140 system_pods.go:89] "kube-controller-manager-ha-230158" [cf39dcfb-ca37-45e7-9306-456ea22b484c] Running
	I0804 00:36:20.136286   21140 system_pods.go:89] "kube-controller-manager-ha-230158-m02" [c751903c-cb15-4718-87d7-f9ccf79d5869] Running
	I0804 00:36:20.136289   21140 system_pods.go:89] "kube-controller-manager-ha-230158-m03" [c49084bd-2f5d-495b-ba60-9861b0681e5e] Running
	I0804 00:36:20.136293   21140 system_pods.go:89] "kube-proxy-8tgp2" [17ce55b9-8d25-4b4a-9b12-ff2cb84c22fa] Running
	I0804 00:36:20.136298   21140 system_pods.go:89] "kube-proxy-llxx2" [b9fbc18d-404d-4733-a31b-d95ab7e04dfd] Running
	I0804 00:36:20.136301   21140 system_pods.go:89] "kube-proxy-vdn92" [02c77eda-8f0e-49d4-ae42-bbf18d0eeaf5] Running
	I0804 00:36:20.136305   21140 system_pods.go:89] "kube-scheduler-ha-230158" [c24d7658-a418-4a21-8e93-e31af5d65e05] Running
	I0804 00:36:20.136310   21140 system_pods.go:89] "kube-scheduler-ha-230158-m02" [97d10375-f0ca-4e13-bc7b-8d775aea4678] Running
	I0804 00:36:20.136315   21140 system_pods.go:89] "kube-scheduler-ha-230158-m03" [d5f8d184-aa92-4e8b-912d-788ccb98fe32] Running
	I0804 00:36:20.136319   21140 system_pods.go:89] "kube-vip-ha-230158" [f784b7b5-0db7-49f2-bcac-3a0dbeee74dd] Running
	I0804 00:36:20.136323   21140 system_pods.go:89] "kube-vip-ha-230158-m02" [0c04a6aa-7d79-4318-9cd7-b936d3358e19] Running
	I0804 00:36:20.136330   21140 system_pods.go:89] "kube-vip-ha-230158-m03" [d8bb79c6-6ae4-47e2-ad7b-e731f070228c] Running
	I0804 00:36:20.136333   21140 system_pods.go:89] "storage-provisioner" [653e0c50-af0a-4708-aaa9-b0d63616df94] Running
	I0804 00:36:20.136339   21140 system_pods.go:126] duration metric: took 219.689305ms to wait for k8s-apps to be running ...
	I0804 00:36:20.136348   21140 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:36:20.136386   21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:36:20.153198   21140 system_svc.go:56] duration metric: took 16.84159ms WaitForService to wait for kubelet
	I0804 00:36:20.153240   21140 kubeadm.go:582] duration metric: took 23.638221933s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
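
Editor's note: the kubelet check above is a single remote command whose exit status is the whole answer: systemctl's is-active verb with --quiet prints nothing and exits 0 only when the unit is active. Minikube issues it over SSH via ssh_runner; the sketch below runs the same check locally instead, which is an assumption of this illustration.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; the unit state is carried by the exit code.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
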
	I0804 00:36:20.153270   21140 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:36:20.313629   21140 request.go:629] Waited for 160.279047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes
	I0804 00:36:20.313684   21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes
	I0804 00:36:20.313690   21140 round_trippers.go:469] Request Headers:
	I0804 00:36:20.313697   21140 round_trippers.go:473]     Accept: application/json, */*
	I0804 00:36:20.313702   21140 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 00:36:20.317377   21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 00:36:20.318808   21140 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:36:20.318827   21140 node_conditions.go:123] node cpu capacity is 2
	I0804 00:36:20.318839   21140 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:36:20.318842   21140 node_conditions.go:123] node cpu capacity is 2
	I0804 00:36:20.318845   21140 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:36:20.318848   21140 node_conditions.go:123] node cpu capacity is 2
	I0804 00:36:20.318851   21140 node_conditions.go:105] duration metric: took 165.576428ms to run NodePressure ...
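
Editor's note: the NodePressure pass above lists the three Node objects and reads their capacity, which is why the same ephemeral-storage (17734596Ki) and cpu (2) figures appear three times. A rough equivalent using the ResourceList accessors from k8s.io/api/core/v1, with a placeholder kubeconfig path:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		caps := n.Status.Capacity
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name, caps.Cpu().String(), caps.StorageEphemeral().String())
	}
}
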
	I0804 00:36:20.318862   21140 start.go:241] waiting for startup goroutines ...
	I0804 00:36:20.318882   21140 start.go:255] writing updated cluster config ...
	I0804 00:36:20.319145   21140 ssh_runner.go:195] Run: rm -f paused
	I0804 00:36:20.367562   21140 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0804 00:36:20.369783   21140 out.go:177] * Done! kubectl is now configured to use "ha-230158" cluster and "default" namespace by default
	
	
	==> Docker <==
	Aug 04 00:33:51 ha-230158 cri-dockerd[1092]: time="2024-08-04T00:33:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/009a8093717e550676eeaf55e6e91ec382fed4759cb3cca76cd44e62049adf56/resolv.conf as [nameserver 192.168.122.1]"
	Aug 04 00:33:51 ha-230158 cri-dockerd[1092]: time="2024-08-04T00:33:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/06fad541ab06ffd1f3e824b90aa63d710251f7fa87d56e35541370ada2f7553e/resolv.conf as [nameserver 192.168.122.1]"
	Aug 04 00:33:51 ha-230158 cri-dockerd[1092]: time="2024-08-04T00:33:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7a62698d656ed8bc98a6334b4542ba4f5ecc61afc972b99c2e5ef586f1c88c14/resolv.conf as [nameserver 192.168.122.1]"
	Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.015145680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.015399497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.015411491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.015513548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.041452513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.041717458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.041749290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.042276749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.105541332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.105752619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.105862987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.106364455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 04 00:36:21 ha-230158 dockerd[1202]: time="2024-08-04T00:36:21.812831312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 04 00:36:21 ha-230158 dockerd[1202]: time="2024-08-04T00:36:21.812977993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 04 00:36:21 ha-230158 dockerd[1202]: time="2024-08-04T00:36:21.815236704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 04 00:36:21 ha-230158 dockerd[1202]: time="2024-08-04T00:36:21.815385085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 04 00:36:21 ha-230158 cri-dockerd[1092]: time="2024-08-04T00:36:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fd1edd455378c9ebd00d93d6f0a55aab769884307524020f5bc39507f5df1acd/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 04 00:36:23 ha-230158 cri-dockerd[1092]: time="2024-08-04T00:36:23Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 04 00:36:23 ha-230158 dockerd[1202]: time="2024-08-04T00:36:23.331366306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 04 00:36:23 ha-230158 dockerd[1202]: time="2024-08-04T00:36:23.332077553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 04 00:36:23 ha-230158 dockerd[1202]: time="2024-08-04T00:36:23.332409422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 04 00:36:23 ha-230158 dockerd[1202]: time="2024-08-04T00:36:23.333025484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	69954bb3c52d0       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   3 minutes ago       Running             busybox                   0                   fd1edd455378c       busybox-fc5497c4f-zkdbc
	4507a06a5c525       cbb01a7bd410d                                                                                         6 minutes ago       Running             coredns                   0                   7a62698d656ed       coredns-7db6d8ff4d-xt2gb
	6bf6de750968a       cbb01a7bd410d                                                                                         6 minutes ago       Running             coredns                   0                   009a8093717e5       coredns-7db6d8ff4d-cqbjc
	7c239c3990b6e       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       0                   06fad541ab06f       storage-provisioner
	210ee81e70d86       kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a              6 minutes ago       Running             kindnet-cni               0                   9e15e12f7e085       kindnet-wfd5t
	7cbd24fa0e03b       55bb025d2cfa5                                                                                         6 minutes ago       Running             kube-proxy                0                   5dde6fe74ac82       kube-proxy-vdn92
	a95a3373ad39b       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   7a020dfa73795       kube-vip-ha-230158
	88a839c50b4a3       3861cfcd7c04c                                                                                         7 minutes ago       Running             etcd                      0                   888fe0699f5db       etcd-ha-230158
	91915a79609ad       1f6d574d502f3                                                                                         7 minutes ago       Running             kube-apiserver            0                   211ccf8ecbfd1       kube-apiserver-ha-230158
	f1d34bc5f7153       3edc18e7b7672                                                                                         7 minutes ago       Running             kube-scheduler            0                   708cdd025b014       kube-scheduler-ha-230158
	0493928ca9b85       76932a3b37d7e                                                                                         7 minutes ago       Running             kube-controller-manager   0                   8a83f286e0d46       kube-controller-manager-ha-230158
	
	
	==> coredns [4507a06a5c52] <==
	[INFO] 10.244.0.4:35608 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000124456s
	[INFO] 10.244.0.4:51919 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002076181s
	[INFO] 10.244.1.2:59825 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117139s
	[INFO] 10.244.1.2:57254 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00190594s
	[INFO] 10.244.1.2:39544 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012697s
	[INFO] 10.244.2.2:57787 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017108s
	[INFO] 10.244.2.2:33350 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001178116s
	[INFO] 10.244.2.2:49475 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000146562s
	[INFO] 10.244.2.2:42126 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000210258s
	[INFO] 10.244.0.4:48459 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124259s
	[INFO] 10.244.0.4:35309 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155423s
	[INFO] 10.244.0.4:52239 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000040083s
	[INFO] 10.244.1.2:38884 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161915s
	[INFO] 10.244.1.2:56249 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000257048s
	[INFO] 10.244.1.2:51787 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095902s
	[INFO] 10.244.2.2:53443 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092673s
	[INFO] 10.244.2.2:43029 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092806s
	[INFO] 10.244.2.2:40101 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083123s
	[INFO] 10.244.0.4:53268 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093003s
	[INFO] 10.244.0.4:43144 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082857s
	[INFO] 10.244.1.2:53993 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215455s
	[INFO] 10.244.2.2:54482 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129926s
	[INFO] 10.244.2.2:37912 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000164869s
	[INFO] 10.244.0.4:42684 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091614s
	[INFO] 10.244.0.4:42052 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000114145s
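
Editor's note: each CoreDNS line above follows the log plugin's default layout: client ip:port, a query counter, the quoted question (type, class, name, transport, request size, DO bit, UDP buffer size), then the response code, response flags, response size, and duration. The parser below is a hypothetical sketch that assumes this field order holds for every line in the dump.

package main

import (
	"fmt"
	"regexp"
)

// lineRE captures: 1 client, 2 query id, 3 qtype, 4 name, 5 proto,
// 6 request size, 7 DO bit, 8 bufsize, 9 rcode, 10 flags, 11 response size, 12 duration.
var lineRE = regexp.MustCompile(
	`^\[INFO\] (\S+) - (\d+) "(\S+) IN (\S+) (\w+) (\d+) (\w+) (\d+)" (\S+) (\S*) (\d+) (\S+)$`)

func main() {
	line := `[INFO] 10.244.0.4:35608 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000124456s`
	m := lineRE.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("client=%s qtype=%s name=%s rcode=%s took=%s\n", m[1], m[3], m[4], m[9], m[12])
}
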
	
	
	==> coredns [6bf6de750968] <==
	[INFO] 10.244.1.2:34476 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003812454s
	[INFO] 10.244.1.2:36251 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000182778s
	[INFO] 10.244.1.2:52606 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167278s
	[INFO] 10.244.1.2:54930 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130115s
	[INFO] 10.244.1.2:38743 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116259s
	[INFO] 10.244.2.2:55784 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00189069s
	[INFO] 10.244.2.2:45571 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000113702s
	[INFO] 10.244.2.2:34311 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007235s
	[INFO] 10.244.2.2:43608 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127033s
	[INFO] 10.244.0.4:35071 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001631133s
	[INFO] 10.244.0.4:40853 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000044285s
	[INFO] 10.244.0.4:53127 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00127252s
	[INFO] 10.244.0.4:56586 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00005511s
	[INFO] 10.244.0.4:50880 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00004863s
	[INFO] 10.244.1.2:58534 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101406s
	[INFO] 10.244.2.2:36136 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152059s
	[INFO] 10.244.0.4:44755 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115732s
	[INFO] 10.244.0.4:36492 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000200582s
	[INFO] 10.244.1.2:34304 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137164s
	[INFO] 10.244.1.2:57141 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000207512s
	[INFO] 10.244.1.2:33291 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000266329s
	[INFO] 10.244.2.2:60551 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000213328s
	[INFO] 10.244.2.2:39454 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000134305s
	[INFO] 10.244.0.4:55561 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000063379s
	[INFO] 10.244.0.4:49797 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109389s
	
	
	==> describe nodes <==
	Name:               ha-230158
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-230158
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=ha-230158
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_04T00_33_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 00:33:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-230158
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:40:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:36:55 +0000   Sun, 04 Aug 2024 00:33:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:36:55 +0000   Sun, 04 Aug 2024 00:33:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:36:55 +0000   Sun, 04 Aug 2024 00:33:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:36:55 +0000   Sun, 04 Aug 2024 00:33:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.132
	  Hostname:    ha-230158
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 abc2ed2fdf234afab4b4880adb15e874
	  System UUID:                abc2ed2f-df23-4afa-b4b4-880adb15e874
	  Boot ID:                    2ca41502-5213-44ab-89e1-9b63019791e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-zkdbc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 coredns-7db6d8ff4d-cqbjc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m42s
	  kube-system                 coredns-7db6d8ff4d-xt2gb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m42s
	  kube-system                 etcd-ha-230158                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m55s
	  kube-system                 kindnet-wfd5t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m42s
	  kube-system                 kube-apiserver-ha-230158             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 kube-controller-manager-ha-230158    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 kube-proxy-vdn92                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 kube-scheduler-ha-230158             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 kube-vip-ha-230158                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m38s  kube-proxy       
	  Normal  Starting                 6m55s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m55s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m55s  kubelet          Node ha-230158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m55s  kubelet          Node ha-230158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m55s  kubelet          Node ha-230158 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m43s  node-controller  Node ha-230158 event: Registered Node ha-230158 in Controller
	  Normal  NodeReady                6m24s  kubelet          Node ha-230158 status is now: NodeReady
	  Normal  RegisteredNode           5m24s  node-controller  Node ha-230158 event: Registered Node ha-230158 in Controller
	  Normal  RegisteredNode           4m4s   node-controller  Node ha-230158 event: Registered Node ha-230158 in Controller
	
	
	Name:               ha-230158-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-230158-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=ha-230158
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_04T00_34_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 00:34:32 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-230158-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:37:36 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 04 Aug 2024 00:36:34 +0000   Sun, 04 Aug 2024 00:38:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 04 Aug 2024 00:36:34 +0000   Sun, 04 Aug 2024 00:38:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 04 Aug 2024 00:36:34 +0000   Sun, 04 Aug 2024 00:38:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 04 Aug 2024 00:36:34 +0000   Sun, 04 Aug 2024 00:38:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.188
	  Hostname:    ha-230158-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 155844a39b0945b984994c69c6243cc5
	  System UUID:                155844a3-9b09-45b9-8499-4c69c6243cc5
	  Boot ID:                    071fc9b5-660b-440f-97f2-8a7bd3388cf4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v69qb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 etcd-ha-230158-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m40s
	  kube-system                 kindnet-n5cql                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m43s
	  kube-system                 kube-apiserver-ha-230158-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-controller-manager-ha-230158-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-proxy-8tgp2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-scheduler-ha-230158-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-vip-ha-230158-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m38s                  kube-proxy       
	  Normal  RegisteredNode           5m43s                  node-controller  Node ha-230158-m02 event: Registered Node ha-230158-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m43s (x8 over 5m43s)  kubelet          Node ha-230158-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m43s (x8 over 5m43s)  kubelet          Node ha-230158-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m43s (x7 over 5m43s)  kubelet          Node ha-230158-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-230158-m02 event: Registered Node ha-230158-m02 in Controller
	  Normal  RegisteredNode           4m4s                   node-controller  Node ha-230158-m02 event: Registered Node ha-230158-m02 in Controller
	  Normal  NodeNotReady             119s                   node-controller  Node ha-230158-m02 status is now: NodeNotReady
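
Editor's note: the Unknown conditions and unreachable taints on ha-230158-m02 are the node controller's reaction to the kubelet going silent when the VM was stopped for the restart test: the lease's RenewTime is 00:37:36, and every condition flips to Unknown at 00:38:16, matching the default 40 s node-monitor-grace-period. A sketch for spotting such nodes programmatically (placeholder kubeconfig; not the test's own code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			// Report nodes whose Ready condition is False or Unknown,
			// as with ha-230158-m02 above.
			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
				fmt.Printf("%s Ready=%s since %s (%s)\n",
					n.Name, c.Status, c.LastTransitionTime, c.Reason)
			}
		}
	}
}
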
	
	
	Name:               ha-230158-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-230158-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=ha-230158
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_04T00_35_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 00:35:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-230158-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:40:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:36:53 +0000   Sun, 04 Aug 2024 00:35:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:36:53 +0000   Sun, 04 Aug 2024 00:35:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:36:53 +0000   Sun, 04 Aug 2024 00:35:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:36:53 +0000   Sun, 04 Aug 2024 00:36:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.35
	  Hostname:    ha-230158-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ae15621aea546b8af7efab921bf3880
	  System UUID:                5ae15621-aea5-46b8-af7e-fab921bf3880
	  Boot ID:                    bb1d0beb-bec0-4188-b329-44d539b745da
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-zdhsb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 etcd-ha-230158-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m20s
	  kube-system                 kindnet-w86v4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m23s
	  kube-system                 kube-apiserver-ha-230158-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-controller-manager-ha-230158-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-llxx2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-scheduler-ha-230158-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-vip-ha-230158-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m18s                  kube-proxy       
	  Normal  RegisteredNode           4m23s                  node-controller  Node ha-230158-m03 event: Registered Node ha-230158-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m23s (x8 over 4m23s)  kubelet          Node ha-230158-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x8 over 4m23s)  kubelet          Node ha-230158-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x7 over 4m23s)  kubelet          Node ha-230158-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-230158-m03 event: Registered Node ha-230158-m03 in Controller
	  Normal  RegisteredNode           4m4s                   node-controller  Node ha-230158-m03 event: Registered Node ha-230158-m03 in Controller
	
	
	Name:               ha-230158-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-230158-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=ha-230158
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_04T00_37_02_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 00:37:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-230158-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:40:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:37:32 +0000   Sun, 04 Aug 2024 00:37:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:37:32 +0000   Sun, 04 Aug 2024 00:37:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:37:32 +0000   Sun, 04 Aug 2024 00:37:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:37:32 +0000   Sun, 04 Aug 2024 00:37:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    ha-230158-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 08c1193a347249feb43822416db43ec8
	  System UUID:                08c1193a-3472-49fe-b438-22416db43ec8
	  Boot ID:                    f2cdda26-4ab8-4a1c-82a1-33749eddad4c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6mhjl       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m14s
	  kube-system                 kube-proxy-b72ff    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m14s (x2 over 3m14s)  kubelet          Node ha-230158-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m14s (x2 over 3m14s)  kubelet          Node ha-230158-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m14s (x2 over 3m14s)  kubelet          Node ha-230158-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m13s                  node-controller  Node ha-230158-m04 event: Registered Node ha-230158-m04 in Controller
	  Normal  RegisteredNode           3m9s                   node-controller  Node ha-230158-m04 event: Registered Node ha-230158-m04 in Controller
	  Normal  RegisteredNode           3m9s                   node-controller  Node ha-230158-m04 event: Registered Node ha-230158-m04 in Controller
	  Normal  NodeReady                2m51s                  kubelet          Node ha-230158-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +4.583120] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.785040] systemd-fstab-generator[507]: Ignoring "noauto" option for root device
	[  +0.060007] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053342] systemd-fstab-generator[518]: Ignoring "noauto" option for root device
	[  +2.043378] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +0.288136] systemd-fstab-generator[805]: Ignoring "noauto" option for root device
	[  +0.114492] systemd-fstab-generator[817]: Ignoring "noauto" option for root device
	[  +0.129550] systemd-fstab-generator[831]: Ignoring "noauto" option for root device
	[  +2.372169] kauditd_printk_skb: 223 callbacks suppressed
	[  +0.111879] systemd-fstab-generator[1045]: Ignoring "noauto" option for root device
	[  +0.114388] systemd-fstab-generator[1057]: Ignoring "noauto" option for root device
	[Aug 4 00:33] systemd-fstab-generator[1069]: Ignoring "noauto" option for root device
	[  +0.148135] systemd-fstab-generator[1084]: Ignoring "noauto" option for root device
	[  +3.514231] systemd-fstab-generator[1187]: Ignoring "noauto" option for root device
	[  +3.917614] kauditd_printk_skb: 132 callbacks suppressed
	[  +0.499073] systemd-fstab-generator[1443]: Ignoring "noauto" option for root device
	[  +3.989671] systemd-fstab-generator[1628]: Ignoring "noauto" option for root device
	[  +0.588004] kauditd_printk_skb: 82 callbacks suppressed
	[  +6.808944] systemd-fstab-generator[2124]: Ignoring "noauto" option for root device
	[  +0.086854] kauditd_printk_skb: 53 callbacks suppressed
	[ +15.709067] kauditd_printk_skb: 12 callbacks suppressed
	[ +15.629676] kauditd_printk_skb: 38 callbacks suppressed
	[Aug 4 00:34] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [88a839c50b4a] <==
	{"level":"warn","ts":"2024-08-04T00:39:47.819125Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5875b509f8714909","rtt":"10.471321ms","error":"dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:39:50.332294Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.188:2380/version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:39:50.332364Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:39:52.819873Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5875b509f8714909","rtt":"10.471321ms","error":"dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:39:52.819991Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5875b509f8714909","rtt":"923.25µs","error":"dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:39:54.333882Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.188:2380/version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:39:54.334019Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:39:57.820335Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5875b509f8714909","rtt":"923.25µs","error":"dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:39:57.82035Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5875b509f8714909","rtt":"10.471321ms","error":"dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:39:58.336814Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.188:2380/version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:39:58.336872Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:40:02.339032Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.188:2380/version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:40:02.33917Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:40:02.820879Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5875b509f8714909","rtt":"923.25µs","error":"dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:40:02.821072Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5875b509f8714909","rtt":"10.471321ms","error":"dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:40:06.340975Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.188:2380/version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:40:06.341042Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:40:07.821517Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5875b509f8714909","rtt":"10.471321ms","error":"dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:40:07.82152Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5875b509f8714909","rtt":"923.25µs","error":"dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:40:10.342847Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.188:2380/version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:40:10.342911Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:40:12.822671Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5875b509f8714909","rtt":"923.25µs","error":"dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:40:12.822713Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5875b509f8714909","rtt":"10.471321ms","error":"dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:40:14.344971Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.188:2380/version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:40:14.345082Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
	
	
	==> kernel <==
	 00:40:15 up 7 min,  0 users,  load average: 0.10, 0.30, 0.18
	Linux ha-230158 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [210ee81e70d8] <==
	I0804 00:39:41.033720       1 main.go:322] Node ha-230158-m04 has CIDR [10.244.3.0/24] 
	I0804 00:39:51.040600       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0804 00:39:51.040708       1 main.go:299] handling current node
	I0804 00:39:51.040901       1 main.go:295] Handling node with IPs: map[192.168.39.188:{}]
	I0804 00:39:51.042301       1 main.go:322] Node ha-230158-m02 has CIDR [10.244.1.0/24] 
	I0804 00:39:51.043369       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0804 00:39:51.043537       1 main.go:322] Node ha-230158-m03 has CIDR [10.244.2.0/24] 
	I0804 00:39:51.043799       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0804 00:39:51.043948       1 main.go:322] Node ha-230158-m04 has CIDR [10.244.3.0/24] 
	I0804 00:40:01.040357       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0804 00:40:01.040525       1 main.go:299] handling current node
	I0804 00:40:01.040717       1 main.go:295] Handling node with IPs: map[192.168.39.188:{}]
	I0804 00:40:01.040857       1 main.go:322] Node ha-230158-m02 has CIDR [10.244.1.0/24] 
	I0804 00:40:01.041402       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0804 00:40:01.041491       1 main.go:322] Node ha-230158-m03 has CIDR [10.244.2.0/24] 
	I0804 00:40:01.041820       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0804 00:40:01.041897       1 main.go:322] Node ha-230158-m04 has CIDR [10.244.3.0/24] 
	I0804 00:40:11.040357       1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
	I0804 00:40:11.040689       1 main.go:299] handling current node
	I0804 00:40:11.040861       1 main.go:295] Handling node with IPs: map[192.168.39.188:{}]
	I0804 00:40:11.040984       1 main.go:322] Node ha-230158-m02 has CIDR [10.244.1.0/24] 
	I0804 00:40:11.041346       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0804 00:40:11.041520       1 main.go:322] Node ha-230158-m03 has CIDR [10.244.2.0/24] 
	I0804 00:40:11.041859       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0804 00:40:11.041961       1 main.go:322] Node ha-230158-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [91915a79609a] <==
	W0804 00:33:19.303852       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.132]
	I0804 00:33:19.304903       1 controller.go:615] quota admission added evaluator for: endpoints
	I0804 00:33:19.309111       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0804 00:33:19.636661       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0804 00:33:20.409356       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0804 00:33:20.425521       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0804 00:33:20.605107       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0804 00:33:33.657031       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0804 00:33:33.795963       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0804 00:36:24.892149       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44516: use of closed network connection
	E0804 00:36:25.084585       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44540: use of closed network connection
	E0804 00:36:25.285993       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44556: use of closed network connection
	E0804 00:36:25.488542       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44566: use of closed network connection
	E0804 00:36:25.679566       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44574: use of closed network connection
	E0804 00:36:25.875505       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44584: use of closed network connection
	E0804 00:36:26.062795       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44608: use of closed network connection
	E0804 00:36:26.247872       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44616: use of closed network connection
	E0804 00:36:26.428236       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44648: use of closed network connection
	E0804 00:36:26.718145       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44672: use of closed network connection
	E0804 00:36:26.897778       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44688: use of closed network connection
	E0804 00:36:27.074024       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44712: use of closed network connection
	E0804 00:36:27.253156       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44734: use of closed network connection
	E0804 00:36:27.435418       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44748: use of closed network connection
	E0804 00:36:27.627361       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44774: use of closed network connection
	W0804 00:37:59.317631       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.132 192.168.39.35]
	
	
	==> kube-controller-manager [0493928ca9b8] <==
	I0804 00:35:52.255897       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-230158-m03\" does not exist"
	I0804 00:35:52.270306       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-230158-m03" podCIDRs=["10.244.2.0/24"]
	I0804 00:35:52.930338       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-230158-m03"
	I0804 00:36:21.334116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="141.738104ms"
	I0804 00:36:21.475587       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="141.412616ms"
	I0804 00:36:21.658949       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="183.297753ms"
	I0804 00:36:21.754512       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.003932ms"
	I0804 00:36:21.783367       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.741619ms"
	I0804 00:36:21.785936       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="213.267µs"
	I0804 00:36:22.186175       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.336µs"
	I0804 00:36:22.390963       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.47µs"
	I0804 00:36:23.973461       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.54516ms"
	I0804 00:36:23.976453       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.983µs"
	I0804 00:36:24.303316       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.154086ms"
	I0804 00:36:24.303449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.971µs"
	I0804 00:36:24.382687       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.219908ms"
	I0804 00:36:24.383084       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="326.263µs"
	E0804 00:37:01.802121       1 certificate_controller.go:146] Sync csr-j7r77 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-j7r77": the object has been modified; please apply your changes to the latest version and try again
	I0804 00:37:01.906129       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-230158-m04\" does not exist"
	I0804 00:37:01.938883       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-230158-m04" podCIDRs=["10.244.3.0/24"]
	I0804 00:37:02.944138       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-230158-m04"
	I0804 00:37:24.043089       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-230158-m04"
	I0804 00:38:16.317984       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-230158-m04"
	I0804 00:38:16.484769       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.211974ms"
	I0804 00:38:16.486491       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="1.623755ms"
	
	
	==> kube-proxy [7cbd24fa0e03] <==
	I0804 00:33:36.359628       1 server_linux.go:69] "Using iptables proxy"
	I0804 00:33:36.382157       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.132"]
	I0804 00:33:36.420473       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 00:33:36.420529       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 00:33:36.420546       1 server_linux.go:165] "Using iptables Proxier"
	I0804 00:33:36.423911       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 00:33:36.424454       1 server.go:872] "Version info" version="v1.30.3"
	I0804 00:33:36.424486       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:33:36.426533       1 config.go:192] "Starting service config controller"
	I0804 00:33:36.426577       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 00:33:36.426790       1 config.go:101] "Starting endpoint slice config controller"
	I0804 00:33:36.426816       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 00:33:36.427838       1 config.go:319] "Starting node config controller"
	I0804 00:33:36.427873       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 00:33:36.527744       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 00:33:36.527987       1 shared_informer.go:320] Caches are synced for node config
	I0804 00:33:36.528032       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [f1d34bc5f715] <==
	E0804 00:33:18.876335       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0804 00:33:18.887399       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0804 00:33:18.887446       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0804 00:33:18.961222       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0804 00:33:18.961583       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0804 00:33:22.154237       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0804 00:36:21.238944       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="cecb795b-aea8-4fed-acac-e99420ca5cf5" pod="default/busybox-fc5497c4f-v69qb" assumedNode="ha-230158-m02" currentNode="ha-230158-m03"
	E0804 00:36:21.268890       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-v69qb\": pod busybox-fc5497c4f-v69qb is already assigned to node \"ha-230158-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-v69qb" node="ha-230158-m03"
	E0804 00:36:21.269285       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod cecb795b-aea8-4fed-acac-e99420ca5cf5(default/busybox-fc5497c4f-v69qb) was assumed on ha-230158-m03 but assigned to ha-230158-m02" pod="default/busybox-fc5497c4f-v69qb"
	E0804 00:36:21.269410       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-v69qb\": pod busybox-fc5497c4f-v69qb is already assigned to node \"ha-230158-m02\"" pod="default/busybox-fc5497c4f-v69qb"
	I0804 00:36:21.269622       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-v69qb" node="ha-230158-m02"
	E0804 00:36:21.339592       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-zkdbc\": pod busybox-fc5497c4f-zkdbc is already assigned to node \"ha-230158\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-zkdbc" node="ha-230158"
	E0804 00:36:21.339732       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-zkdbc\": pod busybox-fc5497c4f-zkdbc is already assigned to node \"ha-230158\"" pod="default/busybox-fc5497c4f-zkdbc"
	E0804 00:37:01.982089       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-b72ff\": pod kube-proxy-b72ff is already assigned to node \"ha-230158-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-b72ff" node="ha-230158-m04"
	E0804 00:37:01.982261       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 17bd64b5-f602-4fdd-aa52-bd291dd235af(kube-system/kube-proxy-b72ff) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-b72ff"
	E0804 00:37:01.982285       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-b72ff\": pod kube-proxy-b72ff is already assigned to node \"ha-230158-m04\"" pod="kube-system/kube-proxy-b72ff"
	I0804 00:37:01.982510       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-b72ff" node="ha-230158-m04"
	E0804 00:37:01.983251       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-6mhjl\": pod kindnet-6mhjl is already assigned to node \"ha-230158-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-6mhjl" node="ha-230158-m04"
	E0804 00:37:01.983297       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod dd391304-a440-45ab-9a55-92422404c4ec(kube-system/kindnet-6mhjl) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-6mhjl"
	E0804 00:37:01.983312       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-6mhjl\": pod kindnet-6mhjl is already assigned to node \"ha-230158-m04\"" pod="kube-system/kindnet-6mhjl"
	I0804 00:37:01.983325       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-6mhjl" node="ha-230158-m04"
	E0804 00:37:02.011560       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-nvzl4\": pod kube-proxy-nvzl4 is already assigned to node \"ha-230158-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-nvzl4" node="ha-230158-m04"
	E0804 00:37:02.011644       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod d59b9e00-f5ee-45a6-ad39-ae31e276f650(kube-system/kube-proxy-nvzl4) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-nvzl4"
	E0804 00:37:02.011947       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-nvzl4\": pod kube-proxy-nvzl4 is already assigned to node \"ha-230158-m04\"" pod="kube-system/kube-proxy-nvzl4"
	I0804 00:37:02.012283       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-nvzl4" node="ha-230158-m04"
	
	
	==> kubelet <==
	Aug 04 00:35:20 ha-230158 kubelet[2131]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:35:20 ha-230158 kubelet[2131]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:35:20 ha-230158 kubelet[2131]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:36:20 ha-230158 kubelet[2131]: E0804 00:36:20.555920    2131 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:36:20 ha-230158 kubelet[2131]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:36:20 ha-230158 kubelet[2131]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:36:20 ha-230158 kubelet[2131]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:36:20 ha-230158 kubelet[2131]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:36:21 ha-230158 kubelet[2131]: I0804 00:36:21.341473    2131 topology_manager.go:215] "Topology Admit Handler" podUID="b9e7a29f-edd4-4541-8e8e-05d5d0c41d28" podNamespace="default" podName="busybox-fc5497c4f-zkdbc"
	Aug 04 00:36:21 ha-230158 kubelet[2131]: I0804 00:36:21.449454    2131 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csj9x\" (UniqueName: \"kubernetes.io/projected/b9e7a29f-edd4-4541-8e8e-05d5d0c41d28-kube-api-access-csj9x\") pod \"busybox-fc5497c4f-zkdbc\" (UID: \"b9e7a29f-edd4-4541-8e8e-05d5d0c41d28\") " pod="default/busybox-fc5497c4f-zkdbc"
	Aug 04 00:37:20 ha-230158 kubelet[2131]: E0804 00:37:20.563718    2131 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:37:20 ha-230158 kubelet[2131]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:37:20 ha-230158 kubelet[2131]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:37:20 ha-230158 kubelet[2131]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:37:20 ha-230158 kubelet[2131]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:38:20 ha-230158 kubelet[2131]: E0804 00:38:20.550944    2131 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:38:20 ha-230158 kubelet[2131]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:38:20 ha-230158 kubelet[2131]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:38:20 ha-230158 kubelet[2131]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:38:20 ha-230158 kubelet[2131]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:39:20 ha-230158 kubelet[2131]: E0804 00:39:20.558349    2131 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:39:20 ha-230158 kubelet[2131]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:39:20 ha-230158 kubelet[2131]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:39:20 ha-230158 kubelet[2131]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:39:20 ha-230158 kubelet[2131]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-230158 -n ha-230158
helpers_test.go:261: (dbg) Run:  kubectl --context ha-230158 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (138.13s)

                                                
                                    

Test pass (314/349)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.32
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.30.3/json-events 4.24
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.05
18 TestDownloadOnly/v1.30.3/DeleteAll 0.12
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-rc.0/json-events 6.21
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.12
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.11
30 TestBinaryMirror 0.54
31 TestOffline 136.19
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.04
36 TestAddons/Setup 247.98
38 TestAddons/serial/Volcano 41.6
40 TestAddons/serial/GCPAuth/Namespaces 0.11
42 TestAddons/parallel/Registry 16.44
43 TestAddons/parallel/Ingress 21.2
44 TestAddons/parallel/InspektorGadget 11.68
45 TestAddons/parallel/MetricsServer 5.63
46 TestAddons/parallel/HelmTiller 12.26
48 TestAddons/parallel/CSI 59.54
49 TestAddons/parallel/Headlamp 18.9
50 TestAddons/parallel/CloudSpanner 5.51
51 TestAddons/parallel/LocalPath 54.86
52 TestAddons/parallel/NvidiaDevicePlugin 5.55
53 TestAddons/parallel/Yakd 11.79
54 TestAddons/StoppedEnableDisable 13.54
55 TestCertOptions 59.53
56 TestCertExpiration 362.91
57 TestDockerFlags 111.43
58 TestForceSystemdFlag 52.75
59 TestForceSystemdEnv 98.46
61 TestKVMDriverInstallOrUpdate 3.55
65 TestErrorSpam/setup 47.73
66 TestErrorSpam/start 0.33
67 TestErrorSpam/status 0.7
68 TestErrorSpam/pause 1.15
69 TestErrorSpam/unpause 1.19
70 TestErrorSpam/stop 16.01
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 63.09
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 40.82
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.08
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.29
82 TestFunctional/serial/CacheCmd/cache/add_local 1.34
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.18
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 62.45
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 0.92
93 TestFunctional/serial/LogsFileCmd 0.98
94 TestFunctional/serial/InvalidService 4.59
96 TestFunctional/parallel/ConfigCmd 0.29
97 TestFunctional/parallel/DashboardCmd 34.25
98 TestFunctional/parallel/DryRun 0.26
99 TestFunctional/parallel/InternationalLanguage 0.13
100 TestFunctional/parallel/StatusCmd 0.9
104 TestFunctional/parallel/ServiceCmdConnect 8.46
105 TestFunctional/parallel/AddonsCmd 0.11
106 TestFunctional/parallel/PersistentVolumeClaim 44.97
108 TestFunctional/parallel/SSHCmd 0.4
109 TestFunctional/parallel/CpCmd 1.29
110 TestFunctional/parallel/MySQL 30.85
111 TestFunctional/parallel/FileSync 0.2
112 TestFunctional/parallel/CertSync 1.27
116 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.21
120 TestFunctional/parallel/License 0.21
121 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
125 TestFunctional/parallel/ImageCommands/ImageBuild 3.51
126 TestFunctional/parallel/ImageCommands/Setup 1.52
127 TestFunctional/parallel/DockerEnv/bash 0.8
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
131 TestFunctional/parallel/ServiceCmd/DeployApp 12.22
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.21
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.74
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.47
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.4
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.69
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.34
148 TestFunctional/parallel/ServiceCmd/List 0.43
149 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.29
151 TestFunctional/parallel/ServiceCmd/Format 0.28
152 TestFunctional/parallel/ServiceCmd/URL 0.3
153 TestFunctional/parallel/ProfileCmd/profile_not_create 0.29
154 TestFunctional/parallel/ProfileCmd/profile_list 0.29
155 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
156 TestFunctional/parallel/MountCmd/any-port 17.9
157 TestFunctional/parallel/Version/short 0.05
158 TestFunctional/parallel/Version/components 0.67
159 TestFunctional/parallel/MountCmd/specific-port 1.6
160 TestFunctional/parallel/MountCmd/VerifyCleanup 1.62
161 TestFunctional/delete_echo-server_images 0.03
162 TestFunctional/delete_my-image_image 0.01
163 TestFunctional/delete_minikube_cached_images 0.02
164 TestGvisorAddon 239.46
167 TestMultiControlPlane/serial/StartCluster 230.22
168 TestMultiControlPlane/serial/DeployApp 5.41
169 TestMultiControlPlane/serial/PingHostFromPods 1.2
170 TestMultiControlPlane/serial/AddWorkerNode 63.33
171 TestMultiControlPlane/serial/NodeLabels 0.06
172 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.51
173 TestMultiControlPlane/serial/CopyFile 12.46
174 TestMultiControlPlane/serial/StopSecondaryNode 13.89
175 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.38
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.5
178 TestMultiControlPlane/serial/RestartClusterKeepsNodes 286.58
179 TestMultiControlPlane/serial/DeleteSecondaryNode 8.11
180 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
181 TestMultiControlPlane/serial/StopCluster 38.4
182 TestMultiControlPlane/serial/RestartCluster 160.62
183 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
184 TestMultiControlPlane/serial/AddSecondaryNode 84.03
185 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.55
188 TestImageBuild/serial/Setup 51.05
189 TestImageBuild/serial/NormalBuild 2.01
190 TestImageBuild/serial/BuildWithBuildArg 1.04
191 TestImageBuild/serial/BuildWithDockerIgnore 0.75
192 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.78
196 TestJSONOutput/start/Command 65.39
197 TestJSONOutput/start/Audit 0
199 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/pause/Command 0.58
203 TestJSONOutput/pause/Audit 0
205 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/unpause/Command 0.55
209 TestJSONOutput/unpause/Audit 0
211 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
214 TestJSONOutput/stop/Command 9.35
215 TestJSONOutput/stop/Audit 0
217 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
218 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
219 TestErrorJSONOutput 0.18
224 TestMainNoArgs 0.04
225 TestMinikubeProfile 103.15
228 TestMountStart/serial/StartWithMountFirst 32.09
229 TestMountStart/serial/VerifyMountFirst 0.36
230 TestMountStart/serial/StartWithMountSecond 30.91
231 TestMountStart/serial/VerifyMountSecond 0.36
232 TestMountStart/serial/DeleteFirst 0.69
233 TestMountStart/serial/VerifyMountPostDelete 0.36
234 TestMountStart/serial/Stop 2.27
235 TestMountStart/serial/RestartStopped 26.22
236 TestMountStart/serial/VerifyMountPostStop 0.35
239 TestMultiNode/serial/FreshStart2Nodes 137.99
240 TestMultiNode/serial/DeployApp2Nodes 4.23
241 TestMultiNode/serial/PingHostFrom2Pods 0.78
242 TestMultiNode/serial/AddNode 57.58
243 TestMultiNode/serial/MultiNodeLabels 0.06
244 TestMultiNode/serial/ProfileList 0.2
245 TestMultiNode/serial/CopyFile 6.9
246 TestMultiNode/serial/StopNode 3.35
247 TestMultiNode/serial/StartAfterStop 42.43
248 TestMultiNode/serial/RestartKeepsNodes 191.6
249 TestMultiNode/serial/DeleteNode 2.18
250 TestMultiNode/serial/StopMultiNode 25.09
251 TestMultiNode/serial/RestartMultiNode 116.07
252 TestMultiNode/serial/ValidateNameConflict 52.09
257 TestPreload 193.54
259 TestScheduledStopUnix 122.64
260 TestSkaffold 134.39
263 TestRunningBinaryUpgrade 146.22
265 TestKubernetesUpgrade 184.38
278 TestStoppedBinaryUpgrade/Setup 0.43
279 TestStoppedBinaryUpgrade/Upgrade 125.46
281 TestPause/serial/Start 113.62
289 TestStoppedBinaryUpgrade/MinikubeLogs 1.12
291 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
292 TestNoKubernetes/serial/StartWithK8s 69.31
293 TestPause/serial/SecondStartNoReconfiguration 80.3
294 TestNetworkPlugins/group/auto/Start 134.99
295 TestNoKubernetes/serial/StartWithStopK8s 43.16
296 TestNetworkPlugins/group/kindnet/Start 83.15
297 TestPause/serial/Pause 0.61
298 TestPause/serial/VerifyStatus 0.25
299 TestPause/serial/Unpause 0.53
300 TestPause/serial/PauseAgain 0.82
301 TestPause/serial/DeletePaused 1.01
302 TestPause/serial/VerifyDeletedResources 0.64
303 TestNoKubernetes/serial/Start 44.5
304 TestNetworkPlugins/group/calico/Start 139.35
305 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
306 TestNoKubernetes/serial/ProfileList 1.1
307 TestNoKubernetes/serial/Stop 2.36
308 TestNoKubernetes/serial/StartNoArgs 47.05
309 TestNetworkPlugins/group/auto/KubeletFlags 0.19
310 TestNetworkPlugins/group/auto/NetCatPod 10.23
311 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
312 TestNetworkPlugins/group/auto/DNS 0.22
313 TestNetworkPlugins/group/auto/Localhost 0.17
314 TestNetworkPlugins/group/auto/HairPin 0.16
315 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
316 TestNetworkPlugins/group/kindnet/NetCatPod 13.7
317 TestNetworkPlugins/group/custom-flannel/Start 90.81
318 TestNetworkPlugins/group/kindnet/DNS 0.21
319 TestNetworkPlugins/group/kindnet/Localhost 0.16
320 TestNetworkPlugins/group/kindnet/HairPin 0.16
321 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
322 TestNetworkPlugins/group/false/Start 100.39
323 TestNetworkPlugins/group/enable-default-cni/Start 117.03
324 TestNetworkPlugins/group/calico/ControllerPod 6.01
325 TestNetworkPlugins/group/calico/KubeletFlags 0.2
326 TestNetworkPlugins/group/calico/NetCatPod 11.19
327 TestNetworkPlugins/group/calico/DNS 0.19
328 TestNetworkPlugins/group/calico/Localhost 0.23
329 TestNetworkPlugins/group/calico/HairPin 0.24
330 TestNetworkPlugins/group/flannel/Start 88.15
331 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
332 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.4
333 TestNetworkPlugins/group/custom-flannel/DNS 0.2
334 TestNetworkPlugins/group/custom-flannel/Localhost 0.23
335 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
336 TestNetworkPlugins/group/false/KubeletFlags 0.21
337 TestNetworkPlugins/group/false/NetCatPod 11.24
338 TestNetworkPlugins/group/false/DNS 0.2
339 TestNetworkPlugins/group/false/Localhost 0.16
340 TestNetworkPlugins/group/false/HairPin 0.18
341 TestNetworkPlugins/group/bridge/Start 108
342 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
343 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.31
344 TestNetworkPlugins/group/kubenet/Start 122.66
345 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
346 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
347 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
349 TestStartStop/group/old-k8s-version/serial/FirstStart 159.92
350 TestNetworkPlugins/group/flannel/ControllerPod 6.01
351 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
352 TestNetworkPlugins/group/flannel/NetCatPod 12.22
353 TestNetworkPlugins/group/flannel/DNS 0.21
354 TestNetworkPlugins/group/flannel/Localhost 0.16
355 TestNetworkPlugins/group/flannel/HairPin 0.15
357 TestStartStop/group/no-preload/serial/FirstStart 84.52
358 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
359 TestNetworkPlugins/group/bridge/NetCatPod 11.25
360 TestNetworkPlugins/group/bridge/DNS 0.19
361 TestNetworkPlugins/group/bridge/Localhost 0.14
362 TestNetworkPlugins/group/bridge/HairPin 0.14
364 TestStartStop/group/embed-certs/serial/FirstStart 108.75
365 TestNetworkPlugins/group/kubenet/KubeletFlags 0.29
366 TestNetworkPlugins/group/kubenet/NetCatPod 11.31
367 TestNetworkPlugins/group/kubenet/DNS 0.2
368 TestNetworkPlugins/group/kubenet/Localhost 0.17
369 TestNetworkPlugins/group/kubenet/HairPin 0.16
371 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 77.82
372 TestStartStop/group/no-preload/serial/DeployApp 10.34
373 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.94
374 TestStartStop/group/no-preload/serial/Stop 13.35
375 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
376 TestStartStop/group/no-preload/serial/SecondStart 362.13
377 TestStartStop/group/old-k8s-version/serial/DeployApp 9.6
378 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.04
379 TestStartStop/group/old-k8s-version/serial/Stop 13.37
380 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
381 TestStartStop/group/old-k8s-version/serial/SecondStart 400.85
382 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.3
383 TestStartStop/group/embed-certs/serial/DeployApp 8.34
384 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.05
385 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.38
386 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.14
387 TestStartStop/group/embed-certs/serial/Stop 13.36
388 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
389 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 325.03
390 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
391 TestStartStop/group/embed-certs/serial/SecondStart 342.2
392 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
393 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
394 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.72
395 TestStartStop/group/no-preload/serial/Pause 2.45
397 TestStartStop/group/newest-cni/serial/FirstStart 62.99
398 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
399 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
400 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
401 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.51
402 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
403 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
404 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
405 TestStartStop/group/embed-certs/serial/Pause 2.57
406 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
407 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
408 TestStartStop/group/newest-cni/serial/DeployApp 0
409 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.81
410 TestStartStop/group/newest-cni/serial/Stop 12.62
411 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
412 TestStartStop/group/old-k8s-version/serial/Pause 2.2
413 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
414 TestStartStop/group/newest-cni/serial/SecondStart 38.41
415 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
416 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
417 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.64
418 TestStartStop/group/newest-cni/serial/Pause 2.1

TestDownloadOnly/v1.20.0/json-events (10.32s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-634998 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-634998 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (10.316891752s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.32s)
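
To reproduce this check outside the harness, a minimal sketch (assuming a released minikube binary on PATH in place of the out/minikube-linux-amd64 build artifact):

	# Download the v1.20.0 ISO, preload tarball, and kubectl without creating a VM;
	# the profile name is arbitrary.
	minikube start -o=json --download-only -p download-only-634998 --force \
	  --alsologtostderr --kubernetes-version=v1.20.0 \
	  --container-runtime=docker --driver=kvm2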

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-634998
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-634998: exit status 85 (52.543538ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-634998 | jenkins | v1.33.1 | 04 Aug 24 00:20 UTC |          |
	|         | -p download-only-634998        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:20:36
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:20:36.485420   11148 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:20:36.485527   11148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:20:36.485536   11148 out.go:304] Setting ErrFile to fd 2...
	I0804 00:20:36.485540   11148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:20:36.485693   11148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
	W0804 00:20:36.485812   11148 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19364-3947/.minikube/config/config.json: open /home/jenkins/minikube-integration/19364-3947/.minikube/config/config.json: no such file or directory
	I0804 00:20:36.486384   11148 out.go:298] Setting JSON to true
	I0804 00:20:36.487226   11148 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":186,"bootTime":1722730650,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:20:36.487286   11148 start.go:139] virtualization: kvm guest
	I0804 00:20:36.489487   11148 out.go:97] [download-only-634998] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0804 00:20:36.489587   11148 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball: no such file or directory
	I0804 00:20:36.489649   11148 notify.go:220] Checking for updates...
	I0804 00:20:36.490982   11148 out.go:169] MINIKUBE_LOCATION=19364
	I0804 00:20:36.492217   11148 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:20:36.493752   11148 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19364-3947/kubeconfig
	I0804 00:20:36.495409   11148 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-3947/.minikube
	I0804 00:20:36.496880   11148 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0804 00:20:36.499669   11148 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0804 00:20:36.499889   11148 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:20:36.602096   11148 out.go:97] Using the kvm2 driver based on user configuration
	I0804 00:20:36.602122   11148 start.go:297] selected driver: kvm2
	I0804 00:20:36.602128   11148 start.go:901] validating driver "kvm2" against <nil>
	I0804 00:20:36.602462   11148 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:20:36.602581   11148 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-3947/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:20:36.617107   11148 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:20:36.617148   11148 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0804 00:20:36.617659   11148 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0804 00:20:36.617821   11148 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0804 00:20:36.617843   11148 cni.go:84] Creating CNI manager for ""
	I0804 00:20:36.617859   11148 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0804 00:20:36.617910   11148 start.go:340] cluster config:
	{Name:download-only-634998 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-634998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:20:36.618079   11148 iso.go:125] acquiring lock: {Name:mk61d89caa127145c801001852615ed27862a97f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:20:36.619835   11148 out.go:97] Downloading VM boot image ...
	I0804 00:20:36.619881   11148 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19364-3947/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0804 00:20:40.541613   11148 out.go:97] Starting "download-only-634998" primary control-plane node in "download-only-634998" cluster
	I0804 00:20:40.541630   11148 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0804 00:20:40.568139   11148 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0804 00:20:40.568161   11148 cache.go:56] Caching tarball of preloaded images
	I0804 00:20:40.568309   11148 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0804 00:20:40.569938   11148 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0804 00:20:40.569953   11148 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0804 00:20:40.598714   11148 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0804 00:20:43.674910   11148 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0804 00:20:43.674992   11148 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0804 00:20:44.538543   11148 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0804 00:20:44.538847   11148 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/download-only-634998/config.json ...
	I0804 00:20:44.538872   11148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/download-only-634998/config.json: {Name:mk07b852a820fbd5e97f15421b845eacabd79302 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:20:44.539038   11148 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0804 00:20:44.539242   11148 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-634998 host does not exist
	  To start a cluster, run: "minikube start -p download-only-634998"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)
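
Exit status 85 above is the expected outcome: "minikube logs" needs a created host, and a --download-only profile never creates one, as the stdout message notes. A sketch of the same check, assuming the profile from this run still exists:

	# Expected to fail with exit status 85; the download-only profile has no host.
	minikube logs -p download-only-634998 || echo "exit status: $?"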

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-634998
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.30.3/json-events (4.24s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-207841 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-207841 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=kvm2 : (4.234862321s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (4.24s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-207841
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-207841: exit status 85 (51.629155ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-634998 | jenkins | v1.33.1 | 04 Aug 24 00:20 UTC |                     |
	|         | -p download-only-634998        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 04 Aug 24 00:20 UTC | 04 Aug 24 00:20 UTC |
	| delete  | -p download-only-634998        | download-only-634998 | jenkins | v1.33.1 | 04 Aug 24 00:20 UTC | 04 Aug 24 00:20 UTC |
	| start   | -o=json --download-only        | download-only-207841 | jenkins | v1.33.1 | 04 Aug 24 00:20 UTC |                     |
	|         | -p download-only-207841        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:20:47
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:20:47.089014   11354 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:20:47.089243   11354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:20:47.089251   11354 out.go:304] Setting ErrFile to fd 2...
	I0804 00:20:47.089256   11354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:20:47.089419   11354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
	I0804 00:20:47.089908   11354 out.go:298] Setting JSON to true
	I0804 00:20:47.090689   11354 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":197,"bootTime":1722730650,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:20:47.090739   11354 start.go:139] virtualization: kvm guest
	I0804 00:20:47.092561   11354 out.go:97] [download-only-207841] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:20:47.092639   11354 notify.go:220] Checking for updates...
	I0804 00:20:47.093898   11354 out.go:169] MINIKUBE_LOCATION=19364
	I0804 00:20:47.095097   11354 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:20:47.096519   11354 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19364-3947/kubeconfig
	I0804 00:20:47.097865   11354 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-3947/.minikube
	I0804 00:20:47.099034   11354 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-207841 host does not exist
	  To start a cluster, run: "minikube start -p download-only-207841"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.05s)

TestDownloadOnly/v1.30.3/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.12s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-207841
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.0-rc.0/json-events (6.21s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-674375 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-674375 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=kvm2 : (6.206652837s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (6.21s)

TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-674375
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-674375: exit status 85 (56.155635ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-634998 | jenkins | v1.33.1 | 04 Aug 24 00:20 UTC |                     |
	|         | -p download-only-634998           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 04 Aug 24 00:20 UTC | 04 Aug 24 00:20 UTC |
	| delete  | -p download-only-634998           | download-only-634998 | jenkins | v1.33.1 | 04 Aug 24 00:20 UTC | 04 Aug 24 00:20 UTC |
	| start   | -o=json --download-only           | download-only-207841 | jenkins | v1.33.1 | 04 Aug 24 00:20 UTC |                     |
	|         | -p download-only-207841           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 04 Aug 24 00:20 UTC | 04 Aug 24 00:20 UTC |
	| delete  | -p download-only-207841           | download-only-207841 | jenkins | v1.33.1 | 04 Aug 24 00:20 UTC | 04 Aug 24 00:20 UTC |
	| start   | -o=json --download-only           | download-only-674375 | jenkins | v1.33.1 | 04 Aug 24 00:20 UTC |                     |
	|         | -p download-only-674375           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:20:51
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:20:51.615053   11545 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:20:51.615158   11545 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:20:51.615168   11545 out.go:304] Setting ErrFile to fd 2...
	I0804 00:20:51.615173   11545 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:20:51.615324   11545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
	I0804 00:20:51.615862   11545 out.go:298] Setting JSON to true
	I0804 00:20:51.616778   11545 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":202,"bootTime":1722730650,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:20:51.616833   11545 start.go:139] virtualization: kvm guest
	I0804 00:20:51.619189   11545 out.go:97] [download-only-674375] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:20:51.619330   11545 notify.go:220] Checking for updates...
	I0804 00:20:51.620689   11545 out.go:169] MINIKUBE_LOCATION=19364
	I0804 00:20:51.622129   11545 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:20:51.623473   11545 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19364-3947/kubeconfig
	I0804 00:20:51.624899   11545 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-3947/.minikube
	I0804 00:20:51.626063   11545 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0804 00:20:51.628082   11545 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0804 00:20:51.628371   11545 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:20:51.658914   11545 out.go:97] Using the kvm2 driver based on user configuration
	I0804 00:20:51.658932   11545 start.go:297] selected driver: kvm2
	I0804 00:20:51.658937   11545 start.go:901] validating driver "kvm2" against <nil>
	I0804 00:20:51.659309   11545 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:20:51.659392   11545 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-3947/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:20:51.672864   11545 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:20:51.672899   11545 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0804 00:20:51.673313   11545 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0804 00:20:51.673453   11545 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0804 00:20:51.673501   11545 cni.go:84] Creating CNI manager for ""
	I0804 00:20:51.673518   11545 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0804 00:20:51.673525   11545 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0804 00:20:51.673573   11545 start.go:340] cluster config:
	{Name:download-only-674375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-674375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:20:51.673647   11545 iso.go:125] acquiring lock: {Name:mk61d89caa127145c801001852615ed27862a97f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:20:51.675334   11545 out.go:97] Starting "download-only-674375" primary control-plane node in "download-only-674375" cluster
	I0804 00:20:51.675351   11545 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0804 00:20:51.697020   11545 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0804 00:20:51.697042   11545 cache.go:56] Caching tarball of preloaded images
	I0804 00:20:51.697142   11545 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0804 00:20:51.698822   11545 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0804 00:20:51.698836   11545 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0804 00:20:51.723323   11545 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:214beb6d5aadd59deaf940ce47a22f04 -> /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0804 00:20:54.480171   11545 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0804 00:20:54.480272   11545 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0804 00:20:55.124673   11545 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0804 00:20:55.125006   11545 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/download-only-674375/config.json ...
	I0804 00:20:55.125036   11545 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/download-only-674375/config.json: {Name:mk024bf4276cfd6cee4fcf2b706a597d6fcc9d63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:20:55.125219   11545 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0804 00:20:55.125378   11545 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.31.0-rc.0/kubectl
	
	
	* The control-plane node download-only-674375 host does not exist
	  To start a cluster, run: "minikube start -p download-only-674375"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.06s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.12s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-674375
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.11s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-538909 --alsologtostderr --binary-mirror http://127.0.0.1:44555 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-538909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-538909
--- PASS: TestBinaryMirror (0.54s)
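
TestBinaryMirror points --binary-mirror at a local HTTP endpoint serving the Kubernetes binaries. A sketch of the idea, assuming a hypothetical ~/mirror directory laid out like dl.k8s.io and Python's stdlib server standing in for the test's listener:

	# Serve a local directory as the mirror (hypothetical layout), then
	# point minikube's binary downloads at it.
	python3 -m http.server 44555 --directory ~/mirror &
	minikube start --download-only -p binary-mirror-538909 \
	  --binary-mirror http://127.0.0.1:44555 --driver=kvm2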

TestOffline (136.19s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-140478 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-140478 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (2m15.260147221s)
helpers_test.go:175: Cleaning up "offline-docker-140478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-140478
--- PASS: TestOffline (136.19s)
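
The offline test exercises a start that is served from the local cache (ISO, preload tarball, images) rather than the network. A minimal sketch of the same invocation, assuming ~/.minikube/cache was warmed by an earlier run:

	# With the cache populated, this start should not need network access.
	minikube start -p offline-docker-140478 --alsologtostderr -v=1 \
	  --memory=2048 --wait=true --driver=kvm2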

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-044946
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-044946: exit status 85 (44.89549ms)

-- stdout --
	* Profile "addons-044946" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-044946"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-044946
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-044946: exit status 85 (44.547294ms)

-- stdout --
	* Profile "addons-044946" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-044946"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

TestAddons/Setup (247.98s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-044946 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-044946 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (4m7.979534447s)
--- PASS: TestAddons/Setup (247.98s)
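
The addons enabled here in one shot at start can also be inspected and toggled individually on the running profile, for example:

	# List addon states, then enable one more on the same profile.
	minikube addons list -p addons-044946
	minikube addons enable dashboard -p addons-044946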

TestAddons/serial/Volcano (41.6s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 23.127108ms
addons_test.go:905: volcano-admission stabilized in 23.173779ms
addons_test.go:913: volcano-controller stabilized in 23.227292ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-m4qfb" [e1b5aa4f-7b7d-44a4-adc8-d70947b05dff] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004009337s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-vwwqc" [11381d9e-a2de-45d0-9146-cdc5a59eaf76] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.00300825s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-nwkhv" [eaa04472-5ffc-4c60-8f5a-6e581cd93bce] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003808156s
addons_test.go:932: (dbg) Run:  kubectl --context addons-044946 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-044946 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-044946 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [10936ab6-8449-4713-8dc2-5dfd338f8c55] Pending
helpers_test.go:344: "test-job-nginx-0" [10936ab6-8449-4713-8dc2-5dfd338f8c55] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [10936ab6-8449-4713-8dc2-5dfd338f8c55] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.004897991s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-044946 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-044946 addons disable volcano --alsologtostderr -v=1: (10.204991654s)
--- PASS: TestAddons/serial/Volcano (41.60s)
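
The harness's pod-health polling maps directly onto kubectl; a sketch of the equivalent waits, assuming the same namespaces and labels as above:

	# Wait for the Volcano scheduler, then for the sample vcjob's pod.
	kubectl --context addons-044946 -n volcano-system wait pod \
	  -l app=volcano-scheduler --for=condition=Ready --timeout=360s
	kubectl --context addons-044946 -n my-volcano wait pod \
	  -l volcano.sh/job-name=test-job --for=condition=Ready --timeout=180s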

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-044946 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-044946 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/Registry (16.44s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.380735ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-bk4s6" [9432dfe1-093c-47f4-bf20-6fa87d39e558] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005763851s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-mc4c5" [b4431ead-d042-46a9-ab5e-cfe4c1fefe54] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004242749s
addons_test.go:342: (dbg) Run:  kubectl --context addons-044946 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-044946 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-044946 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.798520845s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-044946 ip
2024/08/04 00:26:22 [DEBUG] GET http://192.168.39.253:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-044946 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.44s)
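
The registry check amounts to probing port 5000 from inside and outside the cluster; restated as a standalone sketch using the commands from the log:

	# In-cluster probe via the service DNS name...
	kubectl --context addons-044946 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# ...and from the host through the node IP (registry-proxy listens on 5000).
	curl -sS "http://$(minikube -p addons-044946 ip):5000/"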

TestAddons/parallel/Ingress (21.2s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-044946 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-044946 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-044946 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [845d4302-27fa-4dde-b68e-b1c99f07c631] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [845d4302-27fa-4dde-b68e-b1c99f07c631] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004124987s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-044946 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-044946 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-044946 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.253
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-044946 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-044946 addons disable ingress-dns --alsologtostderr -v=1: (1.521818302s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-044946 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-044946 addons disable ingress --alsologtostderr -v=1: (7.609225688s)
--- PASS: TestAddons/parallel/Ingress (21.20s)
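
The ingress assertion is a Host-header curl from inside the VM, restated here as a standalone command:

	# Succeeds only when ingress-nginx routes the matching Host header
	# to the nginx pod created from testdata/nginx-pod-svc.yaml.
	minikube -p addons-044946 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"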

TestAddons/parallel/InspektorGadget (11.68s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-sr5tk" [4e613a08-019a-4eee-9edd-5732366ca684] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00528658s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-044946
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-044946: (5.672361219s)
--- PASS: TestAddons/parallel/InspektorGadget (11.68s)

TestAddons/parallel/MetricsServer (5.63s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.69235ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-ljn46" [bc1dfe67-0c9e-49dc-989a-32861dbd1650] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005453322s
addons_test.go:417: (dbg) Run:  kubectl --context addons-044946 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-044946 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.63s)
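
Once the metrics-server pod is healthy, the functional check is a single kubectl top call, as in the log:

	# Returns rows only after metrics-server has scraped at least one cycle.
	kubectl --context addons-044946 top pods -n kube-system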

TestAddons/parallel/HelmTiller (12.26s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.623351ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-p47gn" [d529d9d5-5dc6-4ad9-bf55-3a8c31d0a4bf] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.00792229s
addons_test.go:475: (dbg) Run:  kubectl --context addons-044946 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-044946 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.681278877s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-044946 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.26s)
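
The tiller check runs the Helm v2 client as a throwaway pod against the in-cluster tiller-deploy; as a standalone command:

	# "version" reports both client and server only if tiller is reachable.
	kubectl --context addons-044946 run --rm helm-test --restart=Never \
	  --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version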

TestAddons/parallel/CSI (59.54s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 14.680449ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-044946 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-044946 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [580b4218-f5f1-4b53-8ec1-3ddcbd2c62b9] Pending
helpers_test.go:344: "task-pv-pod" [580b4218-f5f1-4b53-8ec1-3ddcbd2c62b9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [580b4218-f5f1-4b53-8ec1-3ddcbd2c62b9] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.005965501s
addons_test.go:590: (dbg) Run:  kubectl --context addons-044946 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-044946 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-044946 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-044946 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-044946 delete pod task-pv-pod: (1.526295734s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-044946 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-044946 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-044946 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [863fdf17-134e-4bdc-b3e4-c1f5d161e587] Pending
helpers_test.go:344: "task-pv-pod-restore" [863fdf17-134e-4bdc-b3e4-c1f5d161e587] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [863fdf17-134e-4bdc-b3e4-c1f5d161e587] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004013696s
addons_test.go:632: (dbg) Run:  kubectl --context addons-044946 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-044946 delete pod task-pv-pod-restore: (1.350325048s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-044946 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-044946 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-044946 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-044946 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.810769672s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-044946 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (59.54s)
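
The CSI pass above walks the full provision → snapshot → restore cycle. As a rough manual replay (a sketch only: it assumes a live addons-044946 cluster with the csi-hostpath-driver and volumesnapshots addons enabled, the minikube repo's testdata/ directory, and an installed minikube in place of the out/minikube-linux-amd64 build):
	# provision a PVC against the csi-hostpath driver, then a pod that mounts it
	kubectl --context addons-044946 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-044946 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	# snapshot the live volume
	kubectl --context addons-044946 create -f testdata/csi-hostpath-driver/snapshot.yaml
	# tear down the original pod and PVC, then restore both from the snapshot
	kubectl --context addons-044946 delete pod task-pv-pod
	kubectl --context addons-044946 delete pvc hpvc
	kubectl --context addons-044946 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-044946 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml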

TestAddons/parallel/Headlamp (18.9s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-044946 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-044946 --alsologtostderr -v=1: (1.188554896s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-9d868696f-zhwx4" [66dc5c34-29d4-4991-b7d6-674893eda759] Pending
helpers_test.go:344: "headlamp-9d868696f-zhwx4" [66dc5c34-29d4-4991-b7d6-674893eda759] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-9d868696f-zhwx4" [66dc5c34-29d4-4991-b7d6-674893eda759] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003814013s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-044946 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-044946 addons disable headlamp --alsologtostderr -v=1: (5.708523457s)
--- PASS: TestAddons/parallel/Headlamp (18.90s)

TestAddons/parallel/CloudSpanner (5.51s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-4dkjq" [a811c98f-01ce-40f5-8625-f3c701ed0537] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005159002s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-044946
--- PASS: TestAddons/parallel/CloudSpanner (5.51s)

TestAddons/parallel/LocalPath (54.86s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-044946 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-044946 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-044946 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [2302e01b-e25d-489b-8bd6-f27777575720] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [2302e01b-e25d-489b-8bd6-f27777575720] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [2302e01b-e25d-489b-8bd6-f27777575720] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004591116s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-044946 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-044946 ssh "cat /opt/local-path-provisioner/pvc-7f42f0de-47e0-4920-be63-9e3d0c37e3b1_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-044946 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-044946 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-044946 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-044946 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.081223499s)
--- PASS: TestAddons/parallel/LocalPath (54.86s)
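
The local-path pass shows the Rancher provisioner binding a PVC, a busybox pod writing into the volume, and a host-side read of the resulting file through minikube ssh. A condensed replay (sketch; the pvc-<uid>_default_test-pvc directory name is generated per PVC, so the final ls step here is illustrative, not part of the test):
	kubectl --context addons-044946 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-044946 apply -f testdata/storage-provisioner-rancher/pod.yaml
	# once the pod completes, the data lands under /opt/local-path-provisioner on the node
	minikube -p addons-044946 ssh "ls /opt/local-path-provisioner/"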

TestAddons/parallel/NvidiaDevicePlugin (5.55s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-5mrtg" [81741b08-55cf-4f82-b575-0b984b53882a] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.011211572s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-044946
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.55s)

TestAddons/parallel/Yakd (11.79s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-nzgzf" [9860c7b2-726a-4e21-bb35-ad29507da57a] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005476405s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-044946 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-044946 addons disable yakd --alsologtostderr -v=1: (5.786204399s)
--- PASS: TestAddons/parallel/Yakd (11.79s)

TestAddons/StoppedEnableDisable (13.54s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-044946
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-044946: (13.28217967s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-044946
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-044946
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-044946
--- PASS: TestAddons/StoppedEnableDisable (13.54s)

TestCertOptions (59.53s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-539023 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
E0804 01:19:50.038419   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-539023 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (57.810581087s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-539023 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-539023 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-539023 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-539023" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-539023
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-539023: (1.28000458s)
--- PASS: TestCertOptions (59.53s)
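
TestCertOptions checks that extra SANs and a custom API server port survive into the served certificate. The two essential steps (sketch, with an installed minikube standing in for the out/minikube-linux-amd64 test build):
	minikube start -p cert-options-539023 --memory=2048 \
	  --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
	  --apiserver-names=localhost --apiserver-names=www.google.com \
	  --apiserver-port=8555 --driver=kvm2
	# the extra IPs/names should appear in the Subject Alternative Name section
	minikube -p cert-options-539023 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"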

TestCertExpiration (362.91s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-547739 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-547739 --memory=2048 --cert-expiration=3m --driver=kvm2 : (2m11.725109135s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-547739 --memory=2048 --cert-expiration=8760h --driver=kvm2 
E0804 01:19:14.185724   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/skaffold-042002/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-547739 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (50.089762888s)
helpers_test.go:175: Cleaning up "cert-expiration-547739" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-547739
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-547739: (1.089680174s)
--- PASS: TestCertExpiration (362.91s)
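
TestCertExpiration's ~6-minute wall time is mostly deliberate waiting: the cluster starts with certificates valid for only 3 minutes, the test lets them lapse, and a second start with a long expiration must recover by re-issuing them. In outline (sketch):
	minikube start -p cert-expiration-547739 --memory=2048 --cert-expiration=3m --driver=kvm2
	# wait out the 3m window so the certs actually expire, then restart to force re-issue
	minikube start -p cert-expiration-547739 --memory=2048 --cert-expiration=8760h --driver=kvm2
	minikube delete -p cert-expiration-547739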

TestDockerFlags (111.43s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-081246 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
E0804 01:15:06.990588   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-081246 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m49.989149582s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-081246 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-081246 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-081246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-081246
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-081246: (1.048030311s)
--- PASS: TestDockerFlags (111.43s)

TestForceSystemdFlag (52.75s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-256206 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-256206 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (51.49952617s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-256206 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-256206" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-256206
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-256206: (1.006547212s)
--- PASS: TestForceSystemdFlag (52.75s)

TestForceSystemdEnv (98.46s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-192700 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
E0804 01:19:55.146504   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/skaffold-042002/client.crt: no such file or directory
E0804 01:20:06.990312   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-192700 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m37.039905375s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-192700 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-192700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-192700
E0804 01:21:32.699219   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/gvisor-169607/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-192700: (1.153262241s)
--- PASS: TestForceSystemdEnv (98.46s)

TestKVMDriverInstallOrUpdate (3.55s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.55s)

TestErrorSpam/setup (47.73s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-986458 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-986458 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-986458 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-986458 --driver=kvm2 : (47.731031228s)
--- PASS: TestErrorSpam/setup (47.73s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986458 --log_dir /tmp/nospam-986458 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986458 --log_dir /tmp/nospam-986458 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986458 --log_dir /tmp/nospam-986458 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.7s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986458 --log_dir /tmp/nospam-986458 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986458 --log_dir /tmp/nospam-986458 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986458 --log_dir /tmp/nospam-986458 status
--- PASS: TestErrorSpam/status (0.70s)

TestErrorSpam/pause (1.15s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986458 --log_dir /tmp/nospam-986458 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986458 --log_dir /tmp/nospam-986458 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986458 --log_dir /tmp/nospam-986458 pause
--- PASS: TestErrorSpam/pause (1.15s)

TestErrorSpam/unpause (1.19s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986458 --log_dir /tmp/nospam-986458 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986458 --log_dir /tmp/nospam-986458 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986458 --log_dir /tmp/nospam-986458 unpause
--- PASS: TestErrorSpam/unpause (1.19s)

TestErrorSpam/stop (16.01s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986458 --log_dir /tmp/nospam-986458 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-986458 --log_dir /tmp/nospam-986458 stop: (12.408294221s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986458 --log_dir /tmp/nospam-986458 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-986458 --log_dir /tmp/nospam-986458 stop: (2.04957998s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986458 --log_dir /tmp/nospam-986458 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-986458 --log_dir /tmp/nospam-986458 stop: (1.54863857s)
--- PASS: TestErrorSpam/stop (16.01s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/test/nested/copy/11136/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (63.09s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-168863 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-168863 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m3.087737726s)
--- PASS: TestFunctional/serial/StartWithProxy (63.09s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.82s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-168863 --alsologtostderr -v=8
E0804 00:30:06.990768   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
E0804 00:30:06.996497   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
E0804 00:30:07.006742   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
E0804 00:30:07.027055   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
E0804 00:30:07.067338   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
E0804 00:30:07.147705   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
E0804 00:30:07.308356   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
E0804 00:30:07.628965   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
E0804 00:30:08.269427   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
E0804 00:30:09.550220   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
E0804 00:30:12.111037   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
E0804 00:30:17.231651   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-168863 --alsologtostderr -v=8: (40.822401521s)
functional_test.go:659: soft start took 40.822983716s for "functional-168863" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.82s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-168863 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.29s)

TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-168863 /tmp/TestFunctionalserialCacheCmdcacheadd_local2498063302/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 cache add minikube-local-cache-test:functional-168863
E0804 00:30:27.472715   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 cache delete minikube-local-cache-test:functional-168863
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-168863
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168863 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (209.253885ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.18s)
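
The cache_reload pass demonstrates the recovery path for a deleted cached image: after the rmi, the inspecti probe fails with "no such image", and `cache reload` re-pushes everything in the local cache into the node. Condensed (sketch; installed minikube standing in for the out/minikube-linux-amd64 test build):
	# remove the cached image inside the node; the inspect probe now exits non-zero
	minikube -p functional-168863 ssh sudo docker rmi registry.k8s.io/pause:latest
	minikube -p functional-168863 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	# re-push all cached images, then verify the image is back
	minikube -p functional-168863 cache reload
	minikube -p functional-168863 ssh sudo crictl inspecti registry.k8s.io/pause:latest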

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 kubectl -- --context functional-168863 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-168863 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (62.45s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-168863 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0804 00:30:47.953541   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
E0804 00:31:28.913976   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-168863 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m2.447896318s)
functional_test.go:757: restart took 1m2.447998438s for "functional-168863" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (62.45s)
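
The restart here exercises --extra-config, which forwards a flag to a specific control-plane component (apiserver.enable-admission-plugins=... becomes an --enable-admission-plugins argument on the API server). Reduced to its essentials (sketch):
	minikube start -p functional-168863 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all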

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-168863 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.92s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 logs
--- PASS: TestFunctional/serial/LogsCmd (0.92s)

TestFunctional/serial/LogsFileCmd (0.98s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 logs --file /tmp/TestFunctionalserialLogsFileCmd1320574803/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.98s)

TestFunctional/serial/InvalidService (4.59s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-168863 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-168863
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-168863: exit status 115 (270.039321ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.118:31333 |
	|-----------|-------------|-------------|-----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-168863 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-168863 delete -f testdata/invalidsvc.yaml: (1.126410529s)
--- PASS: TestFunctional/serial/InvalidService (4.59s)
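
Exit status 115 (SVC_UNREACHABLE) is the expected outcome here: the service object exists, so `minikube service` can print its URL table, but with no running pods behind it the command refuses. Minimal replay (sketch; installed minikube standing in for the test build):
	kubectl --context functional-168863 apply -f testdata/invalidsvc.yaml
	minikube service invalid-svc -p functional-168863   # exit 115: no running pod for service
	kubectl --context functional-168863 delete -f testdata/invalidsvc.yaml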

TestFunctional/parallel/ConfigCmd (0.29s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168863 config get cpus: exit status 14 (42.75173ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168863 config get cpus: exit status 14 (49.262192ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)
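
`config get` on an unset key exits 14 with "Error: specified key could not be found in config"; the test asserts that before and after a set/unset round trip. The cycle (sketch; installed minikube standing in for the out/minikube-linux-amd64 test build):
	minikube -p functional-168863 config unset cpus
	minikube -p functional-168863 config get cpus   # exit 14: key not found
	minikube -p functional-168863 config set cpus 2
	minikube -p functional-168863 config get cpus   # succeeds, prints the stored value
	minikube -p functional-168863 config unset cpus
	minikube -p functional-168863 config get cpus   # exit 14 again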

TestFunctional/parallel/DashboardCmd (34.25s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-168863 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-168863 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 20127: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (34.25s)

TestFunctional/parallel/DryRun (0.26s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-168863 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-168863 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (130.835504ms)

-- stdout --
	* [functional-168863] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-3947/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-3947/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0804 00:31:54.983500   19912 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:31:54.983606   19912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:31:54.983616   19912 out.go:304] Setting ErrFile to fd 2...
	I0804 00:31:54.983620   19912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:31:54.983767   19912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
	I0804 00:31:54.984287   19912 out.go:298] Setting JSON to false
	I0804 00:31:54.985206   19912 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":865,"bootTime":1722730650,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:31:54.985265   19912 start.go:139] virtualization: kvm guest
	I0804 00:31:54.988209   19912 out.go:177] * [functional-168863] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:31:54.989732   19912 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 00:31:54.989740   19912 notify.go:220] Checking for updates...
	I0804 00:31:54.992286   19912 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:31:54.993749   19912 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-3947/kubeconfig
	I0804 00:31:54.995117   19912 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-3947/.minikube
	I0804 00:31:54.996331   19912 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:31:54.997635   19912 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:31:54.999294   19912 config.go:182] Loaded profile config "functional-168863": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:31:54.999665   19912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:31:54.999732   19912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:31:55.014941   19912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39103
	I0804 00:31:55.015318   19912 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:31:55.015859   19912 main.go:141] libmachine: Using API Version  1
	I0804 00:31:55.015885   19912 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:31:55.016237   19912 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:31:55.016461   19912 main.go:141] libmachine: (functional-168863) Calling .DriverName
	I0804 00:31:55.016781   19912 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:31:55.017196   19912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:31:55.017235   19912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:31:55.031562   19912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44769
	I0804 00:31:55.031990   19912 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:31:55.032481   19912 main.go:141] libmachine: Using API Version  1
	I0804 00:31:55.032501   19912 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:31:55.032808   19912 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:31:55.032981   19912 main.go:141] libmachine: (functional-168863) Calling .DriverName
	I0804 00:31:55.066871   19912 out.go:177] * Using the kvm2 driver based on existing profile
	I0804 00:31:55.068314   19912 start.go:297] selected driver: kvm2
	I0804 00:31:55.068327   19912 start.go:901] validating driver "kvm2" against &{Name:functional-168863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-168863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:31:55.068437   19912 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:31:55.070739   19912 out.go:177] 
	W0804 00:31:55.072076   19912 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0804 00:31:55.073538   19912 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-168863 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.26s)
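
Note: the 250MB run above is the intended negative case. minikube validates the requested memory against its usable minimum (1800MB) before doing any VM work, so that invocation aborts with RSRC_INSUFFICIENT_REQ_MEMORY, and the follow-up dry run without --memory validates the existing profile and passes. A minimal sketch of the two probes (profile and flags taken from this run; treating any non-zero exit as the failure signal):

	# Negative probe: request less than the 1800MB usable minimum; expect a non-zero exit.
	out/minikube-linux-amd64 start -p functional-168863 --dry-run --memory 250MB --alsologtostderr --driver=kvm2
	# Positive probe: omit --memory so the dry run validates the profile as-is.
	out/minikube-linux-amd64 start -p functional-168863 --dry-run --alsologtostderr -v=1 --driver=kvm2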

TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-168863 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-168863 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (131.987928ms)

-- stdout --
	* [functional-168863] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-3947/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-3947/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0804 00:31:55.244296   19967 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:31:55.244430   19967 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:31:55.244440   19967 out.go:304] Setting ErrFile to fd 2...
	I0804 00:31:55.244446   19967 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:31:55.244719   19967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
	I0804 00:31:55.245227   19967 out.go:298] Setting JSON to false
	I0804 00:31:55.246130   19967 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":865,"bootTime":1722730650,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:31:55.246186   19967 start.go:139] virtualization: kvm guest
	I0804 00:31:55.248312   19967 out.go:177] * [functional-168863] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0804 00:31:55.249912   19967 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 00:31:55.249910   19967 notify.go:220] Checking for updates...
	I0804 00:31:55.251516   19967 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:31:55.252921   19967 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-3947/kubeconfig
	I0804 00:31:55.254487   19967 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-3947/.minikube
	I0804 00:31:55.256079   19967 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:31:55.257440   19967 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:31:55.259093   19967 config.go:182] Loaded profile config "functional-168863": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:31:55.259518   19967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:31:55.259594   19967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:31:55.274206   19967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33363
	I0804 00:31:55.274656   19967 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:31:55.275194   19967 main.go:141] libmachine: Using API Version  1
	I0804 00:31:55.275216   19967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:31:55.275625   19967 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:31:55.275816   19967 main.go:141] libmachine: (functional-168863) Calling .DriverName
	I0804 00:31:55.276074   19967 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:31:55.276409   19967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:31:55.276442   19967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:31:55.291195   19967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33379
	I0804 00:31:55.291579   19967 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:31:55.292063   19967 main.go:141] libmachine: Using API Version  1
	I0804 00:31:55.292083   19967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:31:55.292362   19967 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:31:55.292522   19967 main.go:141] libmachine: (functional-168863) Calling .DriverName
	I0804 00:31:55.325052   19967 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0804 00:31:55.326631   19967 start.go:297] selected driver: kvm2
	I0804 00:31:55.326650   19967 start.go:901] validating driver "kvm2" against &{Name:functional-168863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-168863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:31:55.326777   19967 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:31:55.328999   19967 out.go:177] 
	W0804 00:31:55.330361   19967 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0804 00:31:55.331773   19967 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)
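
For reference, the localized lines above translate as: "* [functional-168863] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)", "* Using the kvm2 driver based on the existing profile", and "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is less than the usable minimum of 1800MB". It is the same failure as in DryRun, rendered in French. minikube chooses its message catalog from the host locale; a rough sketch of reproducing this manually (assuming the standard LC_ALL/LANG detection and that a French locale is installed on the host):

	# Force a French locale for one invocation; user-facing output is translated,
	# while the I.../W... log lines stay in English.
	LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-168863 --dry-run --memory 250MB --driver=kvm2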

TestFunctional/parallel/StatusCmd (0.9s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.90s)
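
The three invocations above exercise the default, Go-template (-f) and JSON (-o json) output paths of "minikube status". The template is evaluated against the status struct, so individual fields can be picked out for scripting; a small sketch (jq assumed to be installed; .Host is one of the fields visible in the template above):

	# Pick fields via a Go template:
	out/minikube-linux-amd64 -p functional-168863 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
	# Or consume the JSON form:
	out/minikube-linux-amd64 -p functional-168863 status -o json | jq -r .Host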

TestFunctional/parallel/ServiceCmdConnect (8.46s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-168863 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-168863 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-6r7gp" [3bc7b533-403c-4e6b-bb98-3e698d675df5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-6r7gp" [3bc7b533-403c-4e6b-bb98-3e698d675df5] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003903106s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.118:31546
functional_test.go:1671: http://192.168.39.118:31546: success! body:

Hostname: hello-node-connect-57b4589c47-6r7gp

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.118:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.118:31546
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.46s)
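
The flow above is: create a deployment, expose it as a NodePort service, let "minikube service --url" resolve the node IP and allocated port (here 192.168.39.118:31546), then GET it and check the echoserver reply. A manual equivalent of the probe (curl assumed to be available on the host):

	# Resolve the NodePort URL, then hit the service directly:
	URL=$(out/minikube-linux-amd64 -p functional-168863 service hello-node-connect --url)
	curl -fsS "$URL"    # echoserver reflects the request, as in the body above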

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (44.97s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [eca11f78-81b1-4659-841a-4c951c1cb8f1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.006752057s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-168863 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-168863 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-168863 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-168863 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-168863 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [244bddf3-0059-4c0a-8407-b51f6eb6ac2c] Pending
helpers_test.go:344: "sp-pod" [244bddf3-0059-4c0a-8407-b51f6eb6ac2c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [244bddf3-0059-4c0a-8407-b51f6eb6ac2c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.003739636s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-168863 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-168863 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-168863 delete -f testdata/storage-provisioner/pod.yaml: (1.380178982s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-168863 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dcd9d142-5b32-4105-8545-ef0068a11274] Pending
helpers_test.go:344: "sp-pod" [dcd9d142-5b32-4105-8545-ef0068a11274] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [dcd9d142-5b32-4105-8545-ef0068a11274] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.006882704s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-168863 exec sp-pod -- ls /tmp/mount
2024/08/04 00:32:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.97s)
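
The sequence above is a persistence check: a file is written through the claim in the first sp-pod, the pod is deleted and recreated, and the file is expected to still be there. Condensed to its essential commands (manifests are the test's own testdata files):

	kubectl --context functional-168863 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-168863 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-168863 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-168863 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-168863 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-168863 exec sp-pod -- ls /tmp/mount    # foo must survive the pod recycle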

TestFunctional/parallel/SSHCmd (0.4s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.40s)

TestFunctional/parallel/CpCmd (1.29s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh -n functional-168863 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 cp functional-168863:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1929110542/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh -n functional-168863 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh -n functional-168863 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.29s)

TestFunctional/parallel/MySQL (30.85s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-168863 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-bnqkx" [5fb2aece-c764-4e5d-bbe1-5baac70a6d7d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-bnqkx" [5fb2aece-c764-4e5d-bbe1-5baac70a6d7d] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 29.01440182s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-168863 exec mysql-64454c8b5c-bnqkx -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-168863 exec mysql-64454c8b5c-bnqkx -- mysql -ppassword -e "show databases;": exit status 1 (299.940927ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-168863 exec mysql-64454c8b5c-bnqkx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.85s)
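
The ERROR 1045 on the first exec is expected noise: the mysql:5.7 container was likely still applying its initial root password when the query ran, so the test simply retries until the query succeeds. A sketch of the same retry (pod name taken from this run):

	# Poll until mysqld has finished initializing and accepts the password:
	until kubectl --context functional-168863 exec mysql-64454c8b5c-bnqkx -- \
	    mysql -ppassword -e "show databases;"; do
	  sleep 2
	done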

TestFunctional/parallel/FileSync (0.2s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/11136/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh "sudo cat /etc/test/nested/copy/11136/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)
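
FileSync exercises minikube's host-to-VM file sync: files staged under $MINIKUBE_HOME/files/<path> on the host are copied to /<path> inside the VM at start. A sketch of staging the file checked above (paths mirror this run, with MINIKUBE_HOME assumed to point at the .minikube directory; the echoed content is the test's expected string):

	mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/11136"
	echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/11136/hosts"
	out/minikube-linux-amd64 -p functional-168863 ssh "cat /etc/test/nested/copy/11136/hosts"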

TestFunctional/parallel/CertSync (1.27s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/11136.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh "sudo cat /etc/ssl/certs/11136.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/11136.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh "sudo cat /usr/share/ca-certificates/11136.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/111362.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh "sudo cat /etc/ssl/certs/111362.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/111362.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh "sudo cat /usr/share/ca-certificates/111362.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.27s)
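
CertSync checks the same sync mechanism for CA certificates: a cert staged on the host ends up in the VM both under its own name (11136.pem, 111362.pem) and under an OpenSSL subject-hash name; the 51391683.0 and 3ec20f2e.0 entries above are those hash names, not arbitrary strings. The hash for a given certificate can be recomputed with openssl (cert path assumed for illustration):

	# Prints e.g. 3ec20f2e; the VM exposes the cert as /etc/ssl/certs/<hash>.0
	openssl x509 -noout -subject_hash -in "$MINIKUBE_HOME/certs/11136.pem"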

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-168863 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
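
The quoting above matters: the whole go-template, including the range over .metadata.labels, is passed to kubectl as a single --template argument. An equivalent probe using kubectl's JSONPath support instead of a go-template:

	kubectl --context functional-168863 get nodes -o jsonpath='{.items[0].metadata.labels}'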

TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168863 ssh "sudo systemctl is-active crio": exit status 1 (213.876921ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)
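
The non-zero exit is the point of this test: with docker as the active runtime, "systemctl is-active crio" prints "inactive" and exits with status 3 (is-active exits non-zero for any state other than "active"), which the ssh wrapper propagates. A passing run therefore looks exactly like the "failure" captured above:

	out/minikube-linux-amd64 -p functional-168863 ssh "sudo systemctl is-active crio"    # expect "inactive" and a non-zero exit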

TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-168863 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-168863
docker.io/kicbase/echo-server:functional-168863
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-168863 image ls --format short --alsologtostderr:
I0804 00:32:10.871433   20276 out.go:291] Setting OutFile to fd 1 ...
I0804 00:32:10.871793   20276 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:32:10.871847   20276 out.go:304] Setting ErrFile to fd 2...
I0804 00:32:10.871865   20276 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:32:10.872335   20276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
I0804 00:32:10.873231   20276 config.go:182] Loaded profile config "functional-168863": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:32:10.873463   20276 config.go:182] Loaded profile config "functional-168863": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:32:10.873856   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:32:10.873904   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:32:10.889289   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44591
I0804 00:32:10.889956   20276 main.go:141] libmachine: () Calling .GetVersion
I0804 00:32:10.890580   20276 main.go:141] libmachine: Using API Version  1
I0804 00:32:10.890599   20276 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:32:10.891092   20276 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:32:10.891303   20276 main.go:141] libmachine: (functional-168863) Calling .GetState
I0804 00:32:10.893358   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:32:10.893390   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:32:10.907814   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35439
I0804 00:32:10.908300   20276 main.go:141] libmachine: () Calling .GetVersion
I0804 00:32:10.908785   20276 main.go:141] libmachine: Using API Version  1
I0804 00:32:10.908807   20276 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:32:10.909125   20276 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:32:10.909318   20276 main.go:141] libmachine: (functional-168863) Calling .DriverName
I0804 00:32:10.909526   20276 ssh_runner.go:195] Run: systemctl --version
I0804 00:32:10.909554   20276 main.go:141] libmachine: (functional-168863) Calling .GetSSHHostname
I0804 00:32:10.912166   20276 main.go:141] libmachine: (functional-168863) DBG | domain functional-168863 has defined MAC address 52:54:00:b1:03:3c in network mk-functional-168863
I0804 00:32:10.912687   20276 main.go:141] libmachine: (functional-168863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:03:3c", ip: ""} in network mk-functional-168863: {Iface:virbr1 ExpiryTime:2024-08-04 01:28:54 +0000 UTC Type:0 Mac:52:54:00:b1:03:3c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:functional-168863 Clientid:01:52:54:00:b1:03:3c}
I0804 00:32:10.912715   20276 main.go:141] libmachine: (functional-168863) DBG | domain functional-168863 has defined IP address 192.168.39.118 and MAC address 52:54:00:b1:03:3c in network mk-functional-168863
I0804 00:32:10.912794   20276 main.go:141] libmachine: (functional-168863) Calling .GetSSHPort
I0804 00:32:10.912946   20276 main.go:141] libmachine: (functional-168863) Calling .GetSSHKeyPath
I0804 00:32:10.913106   20276 main.go:141] libmachine: (functional-168863) Calling .GetSSHUsername
I0804 00:32:10.913271   20276 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/functional-168863/id_rsa Username:docker}
I0804 00:32:11.009404   20276 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0804 00:32:11.087012   20276 main.go:141] libmachine: Making call to close driver server
I0804 00:32:11.087039   20276 main.go:141] libmachine: (functional-168863) Calling .Close
I0804 00:32:11.087312   20276 main.go:141] libmachine: Successfully made call to close driver server
I0804 00:32:11.087339   20276 main.go:141] libmachine: Making call to close connection to plugin binary
I0804 00:32:11.087345   20276 main.go:141] libmachine: (functional-168863) DBG | Closing plugin on server side
I0804 00:32:11.087353   20276 main.go:141] libmachine: Making call to close driver server
I0804 00:32:11.087365   20276 main.go:141] libmachine: (functional-168863) Calling .Close
I0804 00:32:11.087627   20276 main.go:141] libmachine: (functional-168863) DBG | Closing plugin on server side
I0804 00:32:11.087639   20276 main.go:141] libmachine: Successfully made call to close driver server
I0804 00:32:11.087653   20276 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-168863 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 55bb025d2cfa5 | 84.7MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| docker.io/kicbase/echo-server               | functional-168863 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/localhost/my-image                | functional-168863 | 0f566413cb5ce | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-168863 | 2cb6aa23a7e5b | 30B    |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 1f6d574d502f3 | 117MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-scheduler              | v1.30.3           | 3edc18e7b7672 | 62MB   |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 76932a3b37d7e | 111MB  |
| docker.io/library/nginx                     | latest            | a72860cb95fd5 | 188MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-168863 image ls --format table --alsologtostderr:
I0804 00:32:15.094800   20774 out.go:291] Setting OutFile to fd 1 ...
I0804 00:32:15.095106   20774 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:32:15.095120   20774 out.go:304] Setting ErrFile to fd 2...
I0804 00:32:15.095128   20774 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:32:15.095387   20774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
I0804 00:32:15.096106   20774 config.go:182] Loaded profile config "functional-168863": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:32:15.096263   20774 config.go:182] Loaded profile config "functional-168863": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:32:15.096825   20774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:32:15.096883   20774 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:32:15.111260   20774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38573
I0804 00:32:15.111669   20774 main.go:141] libmachine: () Calling .GetVersion
I0804 00:32:15.112280   20774 main.go:141] libmachine: Using API Version  1
I0804 00:32:15.112306   20774 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:32:15.112637   20774 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:32:15.112837   20774 main.go:141] libmachine: (functional-168863) Calling .GetState
I0804 00:32:15.114628   20774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:32:15.114666   20774 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:32:15.129469   20774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38951
I0804 00:32:15.129861   20774 main.go:141] libmachine: () Calling .GetVersion
I0804 00:32:15.130377   20774 main.go:141] libmachine: Using API Version  1
I0804 00:32:15.130402   20774 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:32:15.130771   20774 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:32:15.130985   20774 main.go:141] libmachine: (functional-168863) Calling .DriverName
I0804 00:32:15.131198   20774 ssh_runner.go:195] Run: systemctl --version
I0804 00:32:15.131225   20774 main.go:141] libmachine: (functional-168863) Calling .GetSSHHostname
I0804 00:32:15.134027   20774 main.go:141] libmachine: (functional-168863) DBG | domain functional-168863 has defined MAC address 52:54:00:b1:03:3c in network mk-functional-168863
I0804 00:32:15.134472   20774 main.go:141] libmachine: (functional-168863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:03:3c", ip: ""} in network mk-functional-168863: {Iface:virbr1 ExpiryTime:2024-08-04 01:28:54 +0000 UTC Type:0 Mac:52:54:00:b1:03:3c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:functional-168863 Clientid:01:52:54:00:b1:03:3c}
I0804 00:32:15.134509   20774 main.go:141] libmachine: (functional-168863) DBG | domain functional-168863 has defined IP address 192.168.39.118 and MAC address 52:54:00:b1:03:3c in network mk-functional-168863
I0804 00:32:15.134697   20774 main.go:141] libmachine: (functional-168863) Calling .GetSSHPort
I0804 00:32:15.134841   20774 main.go:141] libmachine: (functional-168863) Calling .GetSSHKeyPath
I0804 00:32:15.134999   20774 main.go:141] libmachine: (functional-168863) Calling .GetSSHUsername
I0804 00:32:15.135104   20774 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/functional-168863/id_rsa Username:docker}
I0804 00:32:15.227147   20774 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0804 00:32:15.277692   20774 main.go:141] libmachine: Making call to close driver server
I0804 00:32:15.277710   20774 main.go:141] libmachine: (functional-168863) Calling .Close
I0804 00:32:15.278044   20774 main.go:141] libmachine: Successfully made call to close driver server
I0804 00:32:15.278071   20774 main.go:141] libmachine: Making call to close connection to plugin binary
I0804 00:32:15.278078   20774 main.go:141] libmachine: Making call to close driver server
I0804 00:32:15.278087   20774 main.go:141] libmachine: (functional-168863) Calling .Close
I0804 00:32:15.278334   20774 main.go:141] libmachine: Successfully made call to close driver server
I0804 00:32:15.278351   20774 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-168863 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"62000000"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c",
"repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"111000000"},{"id":"2cb6aa23a7e5b96b1aed102d25309d2365f5839371efe89b348f05c7cd43fc2e","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-168863"],"size":"30"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"0f566413cb5ce1a29a1e0fc491718c50833483a715de65ec79b8d85488c9c4bb","repoDigests":[],"repoTags":["docker.io/localho
st/my-image:functional-168863"],"size":"1240000"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"84700000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-168863"],"size":"4940000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-168863 image ls --format json --alsologtostderr:
I0804 00:32:14.895894   20750 out.go:291] Setting OutFile to fd 1 ...
I0804 00:32:14.896002   20750 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:32:14.896011   20750 out.go:304] Setting ErrFile to fd 2...
I0804 00:32:14.896015   20750 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:32:14.896225   20750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
I0804 00:32:14.896771   20750 config.go:182] Loaded profile config "functional-168863": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:32:14.896878   20750 config.go:182] Loaded profile config "functional-168863": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:32:14.897250   20750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:32:14.897305   20750 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:32:14.911865   20750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45099
I0804 00:32:14.912298   20750 main.go:141] libmachine: () Calling .GetVersion
I0804 00:32:14.912849   20750 main.go:141] libmachine: Using API Version  1
I0804 00:32:14.912878   20750 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:32:14.913164   20750 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:32:14.913351   20750 main.go:141] libmachine: (functional-168863) Calling .GetState
I0804 00:32:14.915050   20750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:32:14.915088   20750 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:32:14.929081   20750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43683
I0804 00:32:14.929405   20750 main.go:141] libmachine: () Calling .GetVersion
I0804 00:32:14.929911   20750 main.go:141] libmachine: Using API Version  1
I0804 00:32:14.929949   20750 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:32:14.930294   20750 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:32:14.930482   20750 main.go:141] libmachine: (functional-168863) Calling .DriverName
I0804 00:32:14.930691   20750 ssh_runner.go:195] Run: systemctl --version
I0804 00:32:14.930718   20750 main.go:141] libmachine: (functional-168863) Calling .GetSSHHostname
I0804 00:32:14.933516   20750 main.go:141] libmachine: (functional-168863) DBG | domain functional-168863 has defined MAC address 52:54:00:b1:03:3c in network mk-functional-168863
I0804 00:32:14.933927   20750 main.go:141] libmachine: (functional-168863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:03:3c", ip: ""} in network mk-functional-168863: {Iface:virbr1 ExpiryTime:2024-08-04 01:28:54 +0000 UTC Type:0 Mac:52:54:00:b1:03:3c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:functional-168863 Clientid:01:52:54:00:b1:03:3c}
I0804 00:32:14.933956   20750 main.go:141] libmachine: (functional-168863) DBG | domain functional-168863 has defined IP address 192.168.39.118 and MAC address 52:54:00:b1:03:3c in network mk-functional-168863
I0804 00:32:14.934065   20750 main.go:141] libmachine: (functional-168863) Calling .GetSSHPort
I0804 00:32:14.934299   20750 main.go:141] libmachine: (functional-168863) Calling .GetSSHKeyPath
I0804 00:32:14.934417   20750 main.go:141] libmachine: (functional-168863) Calling .GetSSHUsername
I0804 00:32:14.934562   20750 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/functional-168863/id_rsa Username:docker}
I0804 00:32:15.021206   20750 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0804 00:32:15.046912   20750 main.go:141] libmachine: Making call to close driver server
I0804 00:32:15.046924   20750 main.go:141] libmachine: (functional-168863) Calling .Close
I0804 00:32:15.047180   20750 main.go:141] libmachine: Successfully made call to close driver server
I0804 00:32:15.047203   20750 main.go:141] libmachine: Making call to close connection to plugin binary
I0804 00:32:15.047212   20750 main.go:141] libmachine: Making call to close driver server
I0804 00:32:15.047217   20750 main.go:141] libmachine: (functional-168863) DBG | Closing plugin on server side
I0804 00:32:15.047219   20750 main.go:141] libmachine: (functional-168863) Calling .Close
I0804 00:32:15.047450   20750 main.go:141] libmachine: Successfully made call to close driver server
I0804 00:32:15.047469   20750 main.go:141] libmachine: Making call to close connection to plugin binary
I0804 00:32:15.047473   20750 main.go:141] libmachine: (functional-168863) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)
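
Of the four "image ls" formats, the JSON form is the one meant for scripting; for example, tags can be sorted by size with jq (jq assumed to be installed; note that size is reported as a string, hence the tonumber):

	out/minikube-linux-amd64 -p functional-168863 image ls --format json \
	  | jq -r 'sort_by(.size | tonumber) | .[] | "\(.size)\t\(.repoTags[0])"'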

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-168863 image ls --format yaml --alsologtostderr:
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "111000000"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "84700000"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-168863
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "62000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 2cb6aa23a7e5b96b1aed102d25309d2365f5839371efe89b348f05c7cd43fc2e
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-168863
size: "30"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-168863 image ls --format yaml --alsologtostderr:
I0804 00:32:11.139141   20300 out.go:291] Setting OutFile to fd 1 ...
I0804 00:32:11.139247   20300 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:32:11.139256   20300 out.go:304] Setting ErrFile to fd 2...
I0804 00:32:11.139260   20300 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:32:11.139477   20300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
I0804 00:32:11.140045   20300 config.go:182] Loaded profile config "functional-168863": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:32:11.140188   20300 config.go:182] Loaded profile config "functional-168863": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:32:11.140552   20300 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:32:11.140596   20300 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:32:11.156485   20300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40507
I0804 00:32:11.157028   20300 main.go:141] libmachine: () Calling .GetVersion
I0804 00:32:11.157601   20300 main.go:141] libmachine: Using API Version  1
I0804 00:32:11.157625   20300 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:32:11.157951   20300 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:32:11.158149   20300 main.go:141] libmachine: (functional-168863) Calling .GetState
I0804 00:32:11.160080   20300 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:32:11.160135   20300 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:32:11.174721   20300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37195
I0804 00:32:11.175121   20300 main.go:141] libmachine: () Calling .GetVersion
I0804 00:32:11.175559   20300 main.go:141] libmachine: Using API Version  1
I0804 00:32:11.175580   20300 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:32:11.175944   20300 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:32:11.176105   20300 main.go:141] libmachine: (functional-168863) Calling .DriverName
I0804 00:32:11.176299   20300 ssh_runner.go:195] Run: systemctl --version
I0804 00:32:11.176324   20300 main.go:141] libmachine: (functional-168863) Calling .GetSSHHostname
I0804 00:32:11.179140   20300 main.go:141] libmachine: (functional-168863) DBG | domain functional-168863 has defined MAC address 52:54:00:b1:03:3c in network mk-functional-168863
I0804 00:32:11.179564   20300 main.go:141] libmachine: (functional-168863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:03:3c", ip: ""} in network mk-functional-168863: {Iface:virbr1 ExpiryTime:2024-08-04 01:28:54 +0000 UTC Type:0 Mac:52:54:00:b1:03:3c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:functional-168863 Clientid:01:52:54:00:b1:03:3c}
I0804 00:32:11.179597   20300 main.go:141] libmachine: (functional-168863) DBG | domain functional-168863 has defined IP address 192.168.39.118 and MAC address 52:54:00:b1:03:3c in network mk-functional-168863
I0804 00:32:11.179724   20300 main.go:141] libmachine: (functional-168863) Calling .GetSSHPort
I0804 00:32:11.179912   20300 main.go:141] libmachine: (functional-168863) Calling .GetSSHKeyPath
I0804 00:32:11.180104   20300 main.go:141] libmachine: (functional-168863) Calling .GetSSHUsername
I0804 00:32:11.180285   20300 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/functional-168863/id_rsa Username:docker}
I0804 00:32:11.294758   20300 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0804 00:32:11.327101   20300 main.go:141] libmachine: Making call to close driver server
I0804 00:32:11.327115   20300 main.go:141] libmachine: (functional-168863) Calling .Close
I0804 00:32:11.327400   20300 main.go:141] libmachine: (functional-168863) DBG | Closing plugin on server side
I0804 00:32:11.327465   20300 main.go:141] libmachine: Successfully made call to close driver server
I0804 00:32:11.327481   20300 main.go:141] libmachine: Making call to close connection to plugin binary
I0804 00:32:11.327496   20300 main.go:141] libmachine: Making call to close driver server
I0804 00:32:11.327508   20300 main.go:141] libmachine: (functional-168863) Calling .Close
I0804 00:32:11.327735   20300 main.go:141] libmachine: Successfully made call to close driver server
I0804 00:32:11.327762   20300 main.go:141] libmachine: (functional-168863) DBG | Closing plugin on server side
I0804 00:32:11.327775   20300 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168863 ssh pgrep buildkitd: exit status 1 (196.053058ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 image build -t localhost/my-image:functional-168863 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-168863 image build -t localhost/my-image:functional-168863 testdata/build --alsologtostderr: (3.119865445s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-168863 image build -t localhost/my-image:functional-168863 testdata/build --alsologtostderr:
I0804 00:32:11.578090   20354 out.go:291] Setting OutFile to fd 1 ...
I0804 00:32:11.578251   20354 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:32:11.578268   20354 out.go:304] Setting ErrFile to fd 2...
I0804 00:32:11.578275   20354 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:32:11.578430   20354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
I0804 00:32:11.578933   20354 config.go:182] Loaded profile config "functional-168863": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:32:11.579474   20354 config.go:182] Loaded profile config "functional-168863": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:32:11.580037   20354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:32:11.580109   20354 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:32:11.594504   20354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42663
I0804 00:32:11.594994   20354 main.go:141] libmachine: () Calling .GetVersion
I0804 00:32:11.595529   20354 main.go:141] libmachine: Using API Version  1
I0804 00:32:11.595543   20354 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:32:11.595863   20354 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:32:11.596047   20354 main.go:141] libmachine: (functional-168863) Calling .GetState
I0804 00:32:11.597747   20354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:32:11.597784   20354 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:32:11.611579   20354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42411
I0804 00:32:11.611931   20354 main.go:141] libmachine: () Calling .GetVersion
I0804 00:32:11.612387   20354 main.go:141] libmachine: Using API Version  1
I0804 00:32:11.612435   20354 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:32:11.612704   20354 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:32:11.612862   20354 main.go:141] libmachine: (functional-168863) Calling .DriverName
I0804 00:32:11.613048   20354 ssh_runner.go:195] Run: systemctl --version
I0804 00:32:11.613071   20354 main.go:141] libmachine: (functional-168863) Calling .GetSSHHostname
I0804 00:32:11.615741   20354 main.go:141] libmachine: (functional-168863) DBG | domain functional-168863 has defined MAC address 52:54:00:b1:03:3c in network mk-functional-168863
I0804 00:32:11.616185   20354 main.go:141] libmachine: (functional-168863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:03:3c", ip: ""} in network mk-functional-168863: {Iface:virbr1 ExpiryTime:2024-08-04 01:28:54 +0000 UTC Type:0 Mac:52:54:00:b1:03:3c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:functional-168863 Clientid:01:52:54:00:b1:03:3c}
I0804 00:32:11.616216   20354 main.go:141] libmachine: (functional-168863) DBG | domain functional-168863 has defined IP address 192.168.39.118 and MAC address 52:54:00:b1:03:3c in network mk-functional-168863
I0804 00:32:11.616316   20354 main.go:141] libmachine: (functional-168863) Calling .GetSSHPort
I0804 00:32:11.616478   20354 main.go:141] libmachine: (functional-168863) Calling .GetSSHKeyPath
I0804 00:32:11.616688   20354 main.go:141] libmachine: (functional-168863) Calling .GetSSHUsername
I0804 00:32:11.616829   20354 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/functional-168863/id_rsa Username:docker}
I0804 00:32:11.714746   20354 build_images.go:161] Building image from path: /tmp/build.2626864216.tar
I0804 00:32:11.714814   20354 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0804 00:32:11.726918   20354 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2626864216.tar
I0804 00:32:11.732924   20354 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2626864216.tar: stat -c "%s %y" /var/lib/minikube/build/build.2626864216.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2626864216.tar': No such file or directory
I0804 00:32:11.732955   20354 ssh_runner.go:362] scp /tmp/build.2626864216.tar --> /var/lib/minikube/build/build.2626864216.tar (3072 bytes)
I0804 00:32:11.768868   20354 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2626864216
I0804 00:32:11.780446   20354 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2626864216 -xf /var/lib/minikube/build/build.2626864216.tar
I0804 00:32:11.802194   20354 docker.go:360] Building image: /var/lib/minikube/build/build.2626864216
I0804 00:32:11.802315   20354 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-168863 /var/lib/minikube/build/build.2626864216
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:0f566413cb5ce1a29a1e0fc491718c50833483a715de65ec79b8d85488c9c4bb done
#8 naming to localhost/my-image:functional-168863 done
#8 DONE 0.1s
I0804 00:32:14.596797   20354 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-168863 /var/lib/minikube/build/build.2626864216: (2.794455794s)
I0804 00:32:14.596856   20354 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2626864216
I0804 00:32:14.633522   20354 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2626864216.tar
I0804 00:32:14.651286   20354 build_images.go:217] Built localhost/my-image:functional-168863 from /tmp/build.2626864216.tar
I0804 00:32:14.651310   20354 build_images.go:133] succeeded building to: functional-168863
I0804 00:32:14.651314   20354 build_images.go:134] failed building to: 
I0804 00:32:14.651332   20354 main.go:141] libmachine: Making call to close driver server
I0804 00:32:14.651354   20354 main.go:141] libmachine: (functional-168863) Calling .Close
I0804 00:32:14.651657   20354 main.go:141] libmachine: (functional-168863) DBG | Closing plugin on server side
I0804 00:32:14.651666   20354 main.go:141] libmachine: Successfully made call to close driver server
I0804 00:32:14.651681   20354 main.go:141] libmachine: Making call to close connection to plugin binary
I0804 00:32:14.651690   20354 main.go:141] libmachine: Making call to close driver server
I0804 00:32:14.651698   20354 main.go:141] libmachine: (functional-168863) Calling .Close
I0804 00:32:14.652000   20354 main.go:141] libmachine: Successfully made call to close driver server
I0804 00:32:14.652020   20354 main.go:141] libmachine: Making call to close connection to plugin binary
I0804 00:32:14.652032   20354 main.go:141] libmachine: (functional-168863) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.51s)
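For reference, the three build steps in the log above (#5 FROM, #6 RUN, #7 ADD) imply a Dockerfile along these lines; this is a reconstruction from the step output, not the literal contents of testdata/build/Dockerfile, which may differ:

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /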

TestFunctional/parallel/ImageCommands/Setup (1.52s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.499066988s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-168863
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.52s)

TestFunctional/parallel/DockerEnv/bash (0.8s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-168863 docker-env) && out/minikube-linux-amd64 status -p functional-168863"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-168863 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.80s)
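The docker-env flow exercised here can be reproduced by hand; a minimal sketch, assuming the functional-168863 profile is running and minikube is on PATH:

	eval $(minikube -p functional-168863 docker-env)   # point the local docker CLI at the VM's daemon
	docker images                                      # now lists images from inside the minikube VM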

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-168863 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-168863 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-z6tgw" [2ccad4dc-c2eb-4e10-8d58-c7489a51a83e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-z6tgw" [2ccad4dc-c2eb-4e10-8d58-c7489a51a83e] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.004387101s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.22s)
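The deployment this test drives can be created manually; a minimal sketch, assuming a running functional-168863 profile (the service URL itself is retrieved in the ServiceCmd/URL test further below):

	kubectl --context functional-168863 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-168863 expose deployment hello-node --type=NodePort --port=8080
	minikube -p functional-168863 service hello-node --url   # prints the NodePort endpoint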

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 image load --daemon docker.io/kicbase/echo-server:functional-168863 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 image load --daemon docker.io/kicbase/echo-server:functional-168863 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.74s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-168863
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 image load --daemon docker.io/kicbase/echo-server:functional-168863 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.47s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 image save docker.io/kicbase/echo-server:functional-168863 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 image rm docker.io/kicbase/echo-server:functional-168863 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.40s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)
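The save/remove/load round trip covered by the last few image tests, condensed into a manual sketch (the /tmp path is illustrative; the test itself writes under the Jenkins workspace):

	minikube -p functional-168863 image save docker.io/kicbase/echo-server:functional-168863 /tmp/echo-server-save.tar
	minikube -p functional-168863 image rm docker.io/kicbase/echo-server:functional-168863
	minikube -p functional-168863 image load /tmp/echo-server-save.tar
	minikube -p functional-168863 image ls   # the tag should be listed again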

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-168863
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 image save --daemon docker.io/kicbase/echo-server:functional-168863 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-168863
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)

TestFunctional/parallel/ServiceCmd/List (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.43s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 service list -o json
functional_test.go:1490: Took "439.401054ms" to run "out/minikube-linux-amd64 -p functional-168863 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.118:30451
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

TestFunctional/parallel/ServiceCmd/Format (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

TestFunctional/parallel/ServiceCmd/URL (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.118:30451
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "247.422384ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "43.682845ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "298.596578ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "46.220152ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

TestFunctional/parallel/MountCmd/any-port (17.9s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-168863 /tmp/TestFunctionalparallelMountCmdany-port3386128291/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722731514721563413" to /tmp/TestFunctionalparallelMountCmdany-port3386128291/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722731514721563413" to /tmp/TestFunctionalparallelMountCmdany-port3386128291/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722731514721563413" to /tmp/TestFunctionalparallelMountCmdany-port3386128291/001/test-1722731514721563413
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168863 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (240.784372ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  4 00:31 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  4 00:31 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  4 00:31 test-1722731514721563413
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh cat /mount-9p/test-1722731514721563413
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-168863 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d57be6f9-a07f-430d-9f9c-aa19dad88e2e] Pending
helpers_test.go:344: "busybox-mount" [d57be6f9-a07f-430d-9f9c-aa19dad88e2e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d57be6f9-a07f-430d-9f9c-aa19dad88e2e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d57be6f9-a07f-430d-9f9c-aa19dad88e2e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 15.005249025s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-168863 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-168863 /tmp/TestFunctionalparallelMountCmdany-port3386128291/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (17.90s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.67s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.67s)

TestFunctional/parallel/MountCmd/specific-port (1.6s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-168863 /tmp/TestFunctionalparallelMountCmdspecific-port3589077706/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168863 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (221.482531ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-168863 /tmp/TestFunctionalparallelMountCmdspecific-port3589077706/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168863 ssh "sudo umount -f /mount-9p": exit status 1 (214.294149ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-168863 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-168863 /tmp/TestFunctionalparallelMountCmdspecific-port3589077706/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.60s)
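A manual equivalent of the 9p mount checks in the mount tests above, as a sketch (assumes the profile is running; the host directory is illustrative):

	minikube -p functional-168863 mount /tmp/demo:/mount-9p --port 46464 &   # background the mount daemon
	minikube -p functional-168863 ssh "findmnt -T /mount-9p | grep 9p"       # verify the 9p mount is visible
	minikube -p functional-168863 ssh "sudo umount -f /mount-9p"             # force-unmount inside the guest
	kill %1                                                                  # stop the mount daemon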

TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-168863 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1924443912/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-168863 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1924443912/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-168863 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1924443912/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168863 ssh "findmnt -T" /mount1: exit status 1 (271.818661ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-168863 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-168863 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-168863 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1924443912/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-168863 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1924443912/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-168863 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1924443912/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-168863
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-168863
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-168863
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (239.46s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-169607 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-169607 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m40.131922792s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-169607 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-169607 cache add gcr.io/k8s-minikube/gvisor-addon:2: (22.764316915s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-169607 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-169607 addons enable gvisor: (3.625326707s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [2dd63d43-c3ce-4dd9-a3bb-30ce59eef2e3] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004457309s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-169607 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [68150265-4b5c-49bc-bc14-579014dcd6cc] Pending
helpers_test.go:344: "nginx-gvisor" [68150265-4b5c-49bc-bc14-579014dcd6cc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [68150265-4b5c-49bc-bc14-579014dcd6cc] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 51.004422193s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-169607
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-169607: (7.291407646s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-169607 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-169607 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (36.373976889s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [2dd63d43-c3ce-4dd9-a3bb-30ce59eef2e3] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:344: "gvisor" [2dd63d43-c3ce-4dd9-a3bb-30ce59eef2e3] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.007630696s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [68150265-4b5c-49bc-bc14-579014dcd6cc] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.004141609s
helpers_test.go:175: Cleaning up "gvisor-169607" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-169607
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-169607: (1.068209896s)
--- PASS: TestGvisorAddon (239.46s)
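The gVisor enablement sequence this test exercises, condensed from the commands in the run above (the test starts the profile with the containerd runtime before enabling the addon):

	minikube start -p gvisor-169607 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2
	minikube -p gvisor-169607 cache add gcr.io/k8s-minikube/gvisor-addon:2
	minikube -p gvisor-169607 addons enable gvisor
	kubectl --context gvisor-169607 replace --force -f testdata/nginx-gvisor.yaml   # workload pinned to the gvisor runtime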

TestMultiControlPlane/serial/StartCluster (230.22s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-230158 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0804 00:32:50.834772   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
E0804 00:35:06.989878   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
E0804 00:35:34.675834   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-230158 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m49.559141143s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (230.22s)

TestMultiControlPlane/serial/DeployApp (5.41s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-230158 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-230158 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-230158 -- rollout status deployment/busybox: (3.237555894s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-230158 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-230158 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-230158 -- exec busybox-fc5497c4f-v69qb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-230158 -- exec busybox-fc5497c4f-zdhsb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-230158 -- exec busybox-fc5497c4f-zkdbc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-230158 -- exec busybox-fc5497c4f-v69qb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-230158 -- exec busybox-fc5497c4f-zdhsb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-230158 -- exec busybox-fc5497c4f-zkdbc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-230158 -- exec busybox-fc5497c4f-v69qb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-230158 -- exec busybox-fc5497c4f-zdhsb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-230158 -- exec busybox-fc5497c4f-zkdbc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.41s)

TestMultiControlPlane/serial/PingHostFromPods (1.2s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-230158 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-230158 -- exec busybox-fc5497c4f-v69qb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-230158 -- exec busybox-fc5497c4f-v69qb -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-230158 -- exec busybox-fc5497c4f-zdhsb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-230158 -- exec busybox-fc5497c4f-zdhsb -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-230158 -- exec busybox-fc5497c4f-zkdbc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-230158 -- exec busybox-fc5497c4f-zkdbc -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)

TestMultiControlPlane/serial/AddWorkerNode (63.33s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-230158 -v=7 --alsologtostderr
E0804 00:36:39.911443   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
E0804 00:36:39.916755   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
E0804 00:36:39.927079   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
E0804 00:36:39.947369   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
E0804 00:36:39.987679   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
E0804 00:36:40.068027   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
E0804 00:36:40.228513   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
E0804 00:36:40.548914   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
E0804 00:36:41.189678   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
E0804 00:36:42.469868   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
E0804 00:36:45.030366   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
E0804 00:36:50.150783   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
E0804 00:37:00.391550   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
E0804 00:37:20.872699   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-230158 -v=7 --alsologtostderr: (1m2.525723933s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (63.33s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-230158 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.51s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.51s)

TestMultiControlPlane/serial/CopyFile (12.46s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 cp testdata/cp-test.txt ha-230158:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 cp ha-230158:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile571222237/001/cp-test_ha-230158.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 cp ha-230158:/home/docker/cp-test.txt ha-230158-m02:/home/docker/cp-test_ha-230158_ha-230158-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m02 "sudo cat /home/docker/cp-test_ha-230158_ha-230158-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 cp ha-230158:/home/docker/cp-test.txt ha-230158-m03:/home/docker/cp-test_ha-230158_ha-230158-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m03 "sudo cat /home/docker/cp-test_ha-230158_ha-230158-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 cp ha-230158:/home/docker/cp-test.txt ha-230158-m04:/home/docker/cp-test_ha-230158_ha-230158-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m04 "sudo cat /home/docker/cp-test_ha-230158_ha-230158-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 cp testdata/cp-test.txt ha-230158-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 cp ha-230158-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile571222237/001/cp-test_ha-230158-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 cp ha-230158-m02:/home/docker/cp-test.txt ha-230158:/home/docker/cp-test_ha-230158-m02_ha-230158.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158 "sudo cat /home/docker/cp-test_ha-230158-m02_ha-230158.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 cp ha-230158-m02:/home/docker/cp-test.txt ha-230158-m03:/home/docker/cp-test_ha-230158-m02_ha-230158-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m03 "sudo cat /home/docker/cp-test_ha-230158-m02_ha-230158-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 cp ha-230158-m02:/home/docker/cp-test.txt ha-230158-m04:/home/docker/cp-test_ha-230158-m02_ha-230158-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m04 "sudo cat /home/docker/cp-test_ha-230158-m02_ha-230158-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 cp testdata/cp-test.txt ha-230158-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 cp ha-230158-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile571222237/001/cp-test_ha-230158-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 cp ha-230158-m03:/home/docker/cp-test.txt ha-230158:/home/docker/cp-test_ha-230158-m03_ha-230158.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158 "sudo cat /home/docker/cp-test_ha-230158-m03_ha-230158.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 cp ha-230158-m03:/home/docker/cp-test.txt ha-230158-m02:/home/docker/cp-test_ha-230158-m03_ha-230158-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m02 "sudo cat /home/docker/cp-test_ha-230158-m03_ha-230158-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 cp ha-230158-m03:/home/docker/cp-test.txt ha-230158-m04:/home/docker/cp-test_ha-230158-m03_ha-230158-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m04 "sudo cat /home/docker/cp-test_ha-230158-m03_ha-230158-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 cp testdata/cp-test.txt ha-230158-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 cp ha-230158-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile571222237/001/cp-test_ha-230158-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 cp ha-230158-m04:/home/docker/cp-test.txt ha-230158:/home/docker/cp-test_ha-230158-m04_ha-230158.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158 "sudo cat /home/docker/cp-test_ha-230158-m04_ha-230158.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 cp ha-230158-m04:/home/docker/cp-test.txt ha-230158-m02:/home/docker/cp-test_ha-230158-m04_ha-230158-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m02 "sudo cat /home/docker/cp-test_ha-230158-m04_ha-230158-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 cp ha-230158-m04:/home/docker/cp-test.txt ha-230158-m03:/home/docker/cp-test_ha-230158-m04_ha-230158-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 ssh -n ha-230158-m03 "sudo cat /home/docker/cp-test_ha-230158-m04_ha-230158-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.46s)

TestMultiControlPlane/serial/StopSecondaryNode (13.89s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-230158 node stop m02 -v=7 --alsologtostderr: (13.306427654s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr: exit status 7 (580.489449ms)
-- stdout --
	ha-230158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-230158-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-230158-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-230158-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0804 00:37:57.369227   25370 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:37:57.369493   25370 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:37:57.369504   25370 out.go:304] Setting ErrFile to fd 2...
	I0804 00:37:57.369508   25370 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:37:57.369697   25370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
	I0804 00:37:57.369846   25370 out.go:298] Setting JSON to false
	I0804 00:37:57.369867   25370 mustload.go:65] Loading cluster: ha-230158
	I0804 00:37:57.370251   25370 notify.go:220] Checking for updates...
	I0804 00:37:57.371130   25370 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:37:57.371297   25370 status.go:255] checking status of ha-230158 ...
	I0804 00:37:57.371722   25370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:37:57.371780   25370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:37:57.386983   25370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35089
	I0804 00:37:57.387315   25370 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:37:57.387835   25370 main.go:141] libmachine: Using API Version  1
	I0804 00:37:57.387853   25370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:37:57.388203   25370 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:37:57.388424   25370 main.go:141] libmachine: (ha-230158) Calling .GetState
	I0804 00:37:57.390121   25370 status.go:330] ha-230158 host status = "Running" (err=<nil>)
	I0804 00:37:57.390138   25370 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:37:57.390434   25370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:37:57.390470   25370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:37:57.403898   25370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I0804 00:37:57.404363   25370 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:37:57.404790   25370 main.go:141] libmachine: Using API Version  1
	I0804 00:37:57.404810   25370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:37:57.405118   25370 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:37:57.405295   25370 main.go:141] libmachine: (ha-230158) Calling .GetIP
	I0804 00:37:57.407937   25370 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:37:57.408382   25370 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:37:57.408407   25370 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:37:57.408565   25370 host.go:66] Checking if "ha-230158" exists ...
	I0804 00:37:57.408886   25370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:37:57.408920   25370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:37:57.423023   25370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33235
	I0804 00:37:57.423443   25370 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:37:57.423878   25370 main.go:141] libmachine: Using API Version  1
	I0804 00:37:57.423902   25370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:37:57.424266   25370 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:37:57.424442   25370 main.go:141] libmachine: (ha-230158) Calling .DriverName
	I0804 00:37:57.424621   25370 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:37:57.424649   25370 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
	I0804 00:37:57.427309   25370 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:37:57.427715   25370 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
	I0804 00:37:57.427743   25370 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
	I0804 00:37:57.427905   25370 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
	I0804 00:37:57.428059   25370 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
	I0804 00:37:57.428207   25370 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
	I0804 00:37:57.428349   25370 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
	I0804 00:37:57.510440   25370 ssh_runner.go:195] Run: systemctl --version
	I0804 00:37:57.516469   25370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:37:57.530678   25370 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:37:57.530700   25370 api_server.go:166] Checking apiserver status ...
	I0804 00:37:57.530725   25370 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:37:57.544633   25370 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup
	W0804 00:37:57.553910   25370 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:37:57.553963   25370 ssh_runner.go:195] Run: ls
	I0804 00:37:57.558853   25370 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 00:37:57.562961   25370 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 00:37:57.562979   25370 status.go:422] ha-230158 apiserver status = Running (err=<nil>)
	I0804 00:37:57.562987   25370 status.go:257] ha-230158 status: &{Name:ha-230158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:37:57.563008   25370 status.go:255] checking status of ha-230158-m02 ...
	I0804 00:37:57.563287   25370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:37:57.563317   25370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:37:57.577814   25370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35439
	I0804 00:37:57.578249   25370 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:37:57.578701   25370 main.go:141] libmachine: Using API Version  1
	I0804 00:37:57.578721   25370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:37:57.579067   25370 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:37:57.579276   25370 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
	I0804 00:37:57.580875   25370 status.go:330] ha-230158-m02 host status = "Stopped" (err=<nil>)
	I0804 00:37:57.580889   25370 status.go:343] host is not running, skipping remaining checks
	I0804 00:37:57.580896   25370 status.go:257] ha-230158-m02 status: &{Name:ha-230158-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:37:57.580915   25370 status.go:255] checking status of ha-230158-m03 ...
	I0804 00:37:57.581203   25370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:37:57.581245   25370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:37:57.595759   25370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44347
	I0804 00:37:57.596132   25370 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:37:57.596616   25370 main.go:141] libmachine: Using API Version  1
	I0804 00:37:57.596636   25370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:37:57.596966   25370 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:37:57.597175   25370 main.go:141] libmachine: (ha-230158-m03) Calling .GetState
	I0804 00:37:57.598687   25370 status.go:330] ha-230158-m03 host status = "Running" (err=<nil>)
	I0804 00:37:57.598713   25370 host.go:66] Checking if "ha-230158-m03" exists ...
	I0804 00:37:57.599143   25370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:37:57.599183   25370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:37:57.612804   25370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43097
	I0804 00:37:57.613143   25370 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:37:57.613629   25370 main.go:141] libmachine: Using API Version  1
	I0804 00:37:57.613647   25370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:37:57.613967   25370 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:37:57.614155   25370 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
	I0804 00:37:57.616595   25370 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:37:57.617028   25370 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:37:57.617055   25370 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:37:57.617202   25370 host.go:66] Checking if "ha-230158-m03" exists ...
	I0804 00:37:57.617515   25370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:37:57.617553   25370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:37:57.631257   25370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0804 00:37:57.631573   25370 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:37:57.631987   25370 main.go:141] libmachine: Using API Version  1
	I0804 00:37:57.632009   25370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:37:57.632310   25370 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:37:57.632458   25370 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
	I0804 00:37:57.632626   25370 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:37:57.632646   25370 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
	I0804 00:37:57.635245   25370 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:37:57.635611   25370 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
	I0804 00:37:57.635626   25370 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
	I0804 00:37:57.635874   25370 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
	I0804 00:37:57.636040   25370 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
	I0804 00:37:57.636209   25370 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
	I0804 00:37:57.636420   25370 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
	I0804 00:37:57.714245   25370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:37:57.730416   25370 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
	I0804 00:37:57.730453   25370 api_server.go:166] Checking apiserver status ...
	I0804 00:37:57.730492   25370 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:37:57.743974   25370 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup
	W0804 00:37:57.753243   25370 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:37:57.753288   25370 ssh_runner.go:195] Run: ls
	I0804 00:37:57.757590   25370 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 00:37:57.761669   25370 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 00:37:57.761687   25370 status.go:422] ha-230158-m03 apiserver status = Running (err=<nil>)
	I0804 00:37:57.761695   25370 status.go:257] ha-230158-m03 status: &{Name:ha-230158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:37:57.761707   25370 status.go:255] checking status of ha-230158-m04 ...
	I0804 00:37:57.762028   25370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:37:57.762059   25370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:37:57.776262   25370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35911
	I0804 00:37:57.776699   25370 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:37:57.777151   25370 main.go:141] libmachine: Using API Version  1
	I0804 00:37:57.777170   25370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:37:57.777461   25370 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:37:57.777649   25370 main.go:141] libmachine: (ha-230158-m04) Calling .GetState
	I0804 00:37:57.779314   25370 status.go:330] ha-230158-m04 host status = "Running" (err=<nil>)
	I0804 00:37:57.779331   25370 host.go:66] Checking if "ha-230158-m04" exists ...
	I0804 00:37:57.779609   25370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:37:57.779638   25370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:37:57.793689   25370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46617
	I0804 00:37:57.794132   25370 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:37:57.794693   25370 main.go:141] libmachine: Using API Version  1
	I0804 00:37:57.794717   25370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:37:57.795012   25370 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:37:57.795194   25370 main.go:141] libmachine: (ha-230158-m04) Calling .GetIP
	I0804 00:37:57.798068   25370 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:37:57.798589   25370 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
	I0804 00:37:57.798622   25370 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:37:57.798765   25370 host.go:66] Checking if "ha-230158-m04" exists ...
	I0804 00:37:57.799038   25370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:37:57.799071   25370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:37:57.812899   25370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43377
	I0804 00:37:57.813392   25370 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:37:57.813777   25370 main.go:141] libmachine: Using API Version  1
	I0804 00:37:57.813796   25370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:37:57.814082   25370 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:37:57.814255   25370 main.go:141] libmachine: (ha-230158-m04) Calling .DriverName
	I0804 00:37:57.814441   25370 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:37:57.814461   25370 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHHostname
	I0804 00:37:57.817041   25370 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:37:57.817466   25370 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
	I0804 00:37:57.817490   25370 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
	I0804 00:37:57.817616   25370 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHPort
	I0804 00:37:57.817751   25370 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHKeyPath
	I0804 00:37:57.817857   25370 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHUsername
	I0804 00:37:57.817936   25370 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m04/id_rsa Username:docker}
	I0804 00:37:57.893933   25370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:37:57.908950   25370 status.go:257] ha-230158-m04 status: &{Name:ha-230158-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.89s)
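For anyone parsing these status dumps, the status.go:257 lines in the stderr above show the record behind each stanza of the `-- stdout --` report. A reconstructed Go sketch, with field names taken verbatim from the log (the struct name and field types are assumptions; the log prints only the fields):

    // Field names as printed at status.go:257; types assumed.
    type NodeStatus struct {
        Name       string // e.g. "ha-230158-m02"
        Host       string // "Running" or "Stopped"
        Kubelet    string
        APIServer  string
        Kubeconfig string // "Configured" or "Stopped"
        Worker     bool   // true only for ha-230158-m04 here
        TimeToStop string
        DockerEnv  string
        PodManEnv  string
    }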

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.38s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.38s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.5s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.50s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (286.58s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-230158 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-230158 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-230158 -v=7 --alsologtostderr: (1m13.871725159s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-230158 --wait=true -v=7 --alsologtostderr
E0804 00:41:39.911474   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
E0804 00:42:07.594170   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-230158 --wait=true -v=7 --alsologtostderr: (3m32.622720366s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-230158
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (286.58s)

TestMultiControlPlane/serial/DeleteSecondaryNode (8.11s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 node delete m03 -v=7 --alsologtostderr
E0804 00:45:06.989860   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-230158 node delete m03 -v=7 --alsologtostderr: (7.388978765s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.11s)
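The readiness check above packs a Go template into one line; the same template, reindented here purely for readability, just visits every node and prints the status of its "Ready" condition:

    {{range .items}}
      {{range .status.conditions}}
        {{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}
      {{end}}
    {{end}}

(As an argument to kubectl the template stays on one line; a multi-line form would also emit the literal indentation.)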

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

TestMultiControlPlane/serial/StopCluster (38.4s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-230158 stop -v=7 --alsologtostderr: (38.29627334s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr: exit status 7 (98.652058ms)
-- stdout --
	ha-230158
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-230158-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-230158-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0804 00:45:50.310253   28967 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:45:50.310363   28967 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:45:50.310371   28967 out.go:304] Setting ErrFile to fd 2...
	I0804 00:45:50.310375   28967 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:45:50.310544   28967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
	I0804 00:45:50.310696   28967 out.go:298] Setting JSON to false
	I0804 00:45:50.310716   28967 mustload.go:65] Loading cluster: ha-230158
	I0804 00:45:50.310755   28967 notify.go:220] Checking for updates...
	I0804 00:45:50.311070   28967 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:45:50.311083   28967 status.go:255] checking status of ha-230158 ...
	I0804 00:45:50.311446   28967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:45:50.311501   28967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:45:50.331646   28967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38767
	I0804 00:45:50.332036   28967 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:45:50.332590   28967 main.go:141] libmachine: Using API Version  1
	I0804 00:45:50.332614   28967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:45:50.332947   28967 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:45:50.333199   28967 main.go:141] libmachine: (ha-230158) Calling .GetState
	I0804 00:45:50.334858   28967 status.go:330] ha-230158 host status = "Stopped" (err=<nil>)
	I0804 00:45:50.334871   28967 status.go:343] host is not running, skipping remaining checks
	I0804 00:45:50.334877   28967 status.go:257] ha-230158 status: &{Name:ha-230158 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:45:50.334929   28967 status.go:255] checking status of ha-230158-m02 ...
	I0804 00:45:50.335222   28967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:45:50.335267   28967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:45:50.349437   28967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40903
	I0804 00:45:50.349844   28967 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:45:50.350279   28967 main.go:141] libmachine: Using API Version  1
	I0804 00:45:50.350322   28967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:45:50.350686   28967 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:45:50.350840   28967 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
	I0804 00:45:50.352381   28967 status.go:330] ha-230158-m02 host status = "Stopped" (err=<nil>)
	I0804 00:45:50.352397   28967 status.go:343] host is not running, skipping remaining checks
	I0804 00:45:50.352405   28967 status.go:257] ha-230158-m02 status: &{Name:ha-230158-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:45:50.352442   28967 status.go:255] checking status of ha-230158-m04 ...
	I0804 00:45:50.352728   28967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:45:50.352760   28967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:45:50.366771   28967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44617
	I0804 00:45:50.367098   28967 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:45:50.367491   28967 main.go:141] libmachine: Using API Version  1
	I0804 00:45:50.367510   28967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:45:50.367772   28967 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:45:50.367944   28967 main.go:141] libmachine: (ha-230158-m04) Calling .GetState
	I0804 00:45:50.369410   28967 status.go:330] ha-230158-m04 host status = "Stopped" (err=<nil>)
	I0804 00:45:50.369431   28967 status.go:343] host is not running, skipping remaining checks
	I0804 00:45:50.369438   28967 status.go:257] ha-230158-m04 status: &{Name:ha-230158-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (38.40s)

TestMultiControlPlane/serial/RestartCluster (160.62s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-230158 --wait=true -v=7 --alsologtostderr --driver=kvm2 
E0804 00:46:30.036938   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
E0804 00:46:39.912067   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-230158 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (2m39.905554583s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (160.62s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

TestMultiControlPlane/serial/AddSecondaryNode (84.03s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-230158 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-230158 --control-plane -v=7 --alsologtostderr: (1m23.229245204s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (84.03s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

TestImageBuild/serial/Setup (51.05s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-878814 --driver=kvm2 
E0804 00:50:06.990345   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-878814 --driver=kvm2 : (51.047161661s)
--- PASS: TestImageBuild/serial/Setup (51.05s)

TestImageBuild/serial/NormalBuild (2.01s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-878814
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-878814: (2.013397665s)
--- PASS: TestImageBuild/serial/NormalBuild (2.01s)

TestImageBuild/serial/BuildWithBuildArg (1.04s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-878814
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-878814: (1.043791411s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.04s)

TestImageBuild/serial/BuildWithDockerIgnore (0.75s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-878814
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.75s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.78s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-878814
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.78s)

TestJSONOutput/start/Command (65.39s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-509169 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0804 00:51:39.911808   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-509169 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m5.392811383s)
--- PASS: TestJSONOutput/start/Command (65.39s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.58s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-509169 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.58s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.55s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-509169 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (9.35s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-509169 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-509169 --output=json --user=testUser: (9.34911773s)
--- PASS: TestJSONOutput/stop/Command (9.35s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-000831 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-000831 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.58528ms)
-- stdout --
	{"specversion":"1.0","id":"e9a1b020-6190-45af-af08-4f82d8827d93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-000831] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"928dea7e-bb83-48f0-ac6e-37d03f7465da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19364"}}
	{"specversion":"1.0","id":"5f09c812-87c6-4643-8d75-4b223a1d882c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3aa7e940-6f03-4357-a128-b338fd3beb7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19364-3947/kubeconfig"}}
	{"specversion":"1.0","id":"4a89bfda-3e9e-40d7-9b68-25f255bdafc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-3947/.minikube"}}
	{"specversion":"1.0","id":"572b22ea-2ba7-42db-a80f-6c9ba3ceb838","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"49f44e11-fe2a-471b-a9b5-240fa3369e32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ef63b91f-e335-45d0-bb9a-e30ed71f08df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-000831" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-000831
--- PASS: TestErrorJSONOutput (0.18s)
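The `--output=json` stream in the stdout above is newline-delimited CloudEvents-style JSON (specversion, id, source, type, datacontenttype, data). A minimal consumer sketch, assuming one event per line as in this run (hypothetical tooling, not part of minikube):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event mirrors fields visible in the log; data holds the
    // string-valued payload (message, exitcode, name, ...).
    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var e event
            if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
                continue // tolerate non-JSON lines
            }
            // e.g. "io.k8s.sigs.minikube.error  The driver 'fail' is not supported on linux/amd64"
            fmt.Printf("%s  %s\n", e.Type, e.Data["message"])
        }
    }

Piping `minikube start --output=json ... | go run events.go` would, under these assumptions, print one line per step/info/error event.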

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (103.15s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-153821 --driver=kvm2 
E0804 00:53:02.954402   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-153821 --driver=kvm2 : (50.029346599s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-157345 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-157345 --driver=kvm2 : (50.529614962s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-153821
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-157345
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-157345" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-157345
helpers_test.go:175: Cleaning up "first-153821" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-153821
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-153821: (1.001495857s)
--- PASS: TestMinikubeProfile (103.15s)

TestMountStart/serial/StartWithMountFirst (32.09s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-762976 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-762976 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (31.090836918s)
--- PASS: TestMountStart/serial/StartWithMountFirst (32.09s)

TestMountStart/serial/VerifyMountFirst (0.36s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-762976 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-762976 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
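
The pair of checks above verifies that the host directory is visible at /minikube-host and that the share really is a 9p mount. A hedged Go sketch of the same `mount | grep 9p` check, reading /proc/mounts (the same table `mount` prints) instead of shelling out; it would have to run inside the guest, e.g. over `minikube ssh`:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // has9pMount reports whether any /proc/mounts entry uses the 9p
    // filesystem type (fields: device mountpoint fstype options ...).
    func has9pMount() (bool, error) {
        f, err := os.Open("/proc/mounts")
        if err != nil {
            return false, err
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 3 && fields[2] == "9p" {
                return true, nil
            }
        }
        return false, sc.Err()
    }

    func main() {
        ok, err := has9pMount()
        if err != nil {
            panic(err)
        }
        fmt.Println("9p mount present:", ok)
    }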

                                                
                                    
TestMountStart/serial/StartWithMountSecond (30.91s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-776314 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-776314 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (29.913426039s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.91s)

TestMountStart/serial/VerifyMountSecond (0.36s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-776314 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-776314 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

TestMountStart/serial/DeleteFirst (0.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-762976 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

TestMountStart/serial/VerifyMountPostDelete (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-776314 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-776314 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

TestMountStart/serial/Stop (2.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-776314
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-776314: (2.268630504s)
--- PASS: TestMountStart/serial/Stop (2.27s)

TestMountStart/serial/RestartStopped (26.22s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-776314
E0804 00:55:06.990470   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-776314: (25.215549033s)
--- PASS: TestMountStart/serial/RestartStopped (26.22s)

TestMountStart/serial/VerifyMountPostStop (0.35s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-776314 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-776314 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.35s)

TestMultiNode/serial/FreshStart2Nodes (137.99s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-185994 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0804 00:56:39.912225   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-185994 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m17.602331403s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (137.99s)

TestMultiNode/serial/DeployApp2Nodes (4.23s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185994 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185994 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-185994 -- rollout status deployment/busybox: (2.68686039s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185994 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185994 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185994 -- exec busybox-fc5497c4f-f8w2s -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185994 -- exec busybox-fc5497c4f-lg9hp -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185994 -- exec busybox-fc5497c4f-f8w2s -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185994 -- exec busybox-fc5497c4f-lg9hp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185994 -- exec busybox-fc5497c4f-f8w2s -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185994 -- exec busybox-fc5497c4f-lg9hp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.23s)
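
The three lookups above probe, in order: external DNS (kubernetes.io), short-name resolution through the pod's DNS search path (kubernetes.default), and the fully qualified cluster name. A small Go sketch of the same trio using the standard resolver; run anywhere it can check the first name, but the last two would only resolve from inside a cluster pod:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        for _, host := range []string{
            "kubernetes.io",                        // external DNS
            "kubernetes.default",                   // via search path
            "kubernetes.default.svc.cluster.local", // fully qualified
        } {
            addrs, err := net.LookupHost(host)
            fmt.Println(host, addrs, err)
        }
    }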

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185994 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185994 -- exec busybox-fc5497c4f-f8w2s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185994 -- exec busybox-fc5497c4f-f8w2s -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185994 -- exec busybox-fc5497c4f-lg9hp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-185994 -- exec busybox-fc5497c4f-lg9hp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)
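
The `awk 'NR==5' | cut -d' ' -f3` pipeline leans on busybox nslookup's fixed layout, where the fifth line reads `Address 1: <ip> <name>` and the third space-separated field is the resolved address of host.minikube.internal (the hypervisor-side gateway, 192.168.39.1 here, which the follow-up ping then targets). A Go sketch of the same extraction; the sample output below illustrates that assumed layout rather than being captured from this run:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Assumed busybox nslookup layout: server info on lines 1-2,
        // then a blank line, then Name/Address for the queried host.
        out := `Server:    10.96.0.10
    Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

    Name:      host.minikube.internal
    Address 1: 192.168.39.1 host.minikube.internal`

        lines := strings.Split(out, "\n")
        fields := strings.Fields(lines[4]) // awk 'NR==5'
        fmt.Println(fields[2])             // cut -f3 -> 192.168.39.1
    }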

                                                
                                    
TestMultiNode/serial/AddNode (57.58s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-185994 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-185994 -v 3 --alsologtostderr: (57.032186196s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.58s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-185994 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.2s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

TestMultiNode/serial/CopyFile (6.9s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 cp testdata/cp-test.txt multinode-185994:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 ssh -n multinode-185994 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 cp multinode-185994:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile216061912/001/cp-test_multinode-185994.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 ssh -n multinode-185994 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 cp multinode-185994:/home/docker/cp-test.txt multinode-185994-m02:/home/docker/cp-test_multinode-185994_multinode-185994-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 ssh -n multinode-185994 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 ssh -n multinode-185994-m02 "sudo cat /home/docker/cp-test_multinode-185994_multinode-185994-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 cp multinode-185994:/home/docker/cp-test.txt multinode-185994-m03:/home/docker/cp-test_multinode-185994_multinode-185994-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 ssh -n multinode-185994 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 ssh -n multinode-185994-m03 "sudo cat /home/docker/cp-test_multinode-185994_multinode-185994-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 cp testdata/cp-test.txt multinode-185994-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 ssh -n multinode-185994-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 cp multinode-185994-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile216061912/001/cp-test_multinode-185994-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 ssh -n multinode-185994-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 cp multinode-185994-m02:/home/docker/cp-test.txt multinode-185994:/home/docker/cp-test_multinode-185994-m02_multinode-185994.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 ssh -n multinode-185994-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 ssh -n multinode-185994 "sudo cat /home/docker/cp-test_multinode-185994-m02_multinode-185994.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 cp multinode-185994-m02:/home/docker/cp-test.txt multinode-185994-m03:/home/docker/cp-test_multinode-185994-m02_multinode-185994-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 ssh -n multinode-185994-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 ssh -n multinode-185994-m03 "sudo cat /home/docker/cp-test_multinode-185994-m02_multinode-185994-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 cp testdata/cp-test.txt multinode-185994-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 ssh -n multinode-185994-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 cp multinode-185994-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile216061912/001/cp-test_multinode-185994-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 ssh -n multinode-185994-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 cp multinode-185994-m03:/home/docker/cp-test.txt multinode-185994:/home/docker/cp-test_multinode-185994-m03_multinode-185994.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 ssh -n multinode-185994-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 ssh -n multinode-185994 "sudo cat /home/docker/cp-test_multinode-185994-m03_multinode-185994.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 cp multinode-185994-m03:/home/docker/cp-test.txt multinode-185994-m02:/home/docker/cp-test_multinode-185994-m03_multinode-185994-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 ssh -n multinode-185994-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 ssh -n multinode-185994-m02 "sudo cat /home/docker/cp-test_multinode-185994-m03_multinode-185994-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.90s)
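
Every `cp`/`ssh` pair above is the same round trip: push a file onto a node with `minikube cp`, then `sudo cat` it back over ssh and compare. A sketch of that verification written the way the harness drives things (shelling out to the test binary); the profile, node, and paths are taken from the log, while the expected-content string is a placeholder since cp-test.txt's contents aren't shown:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // copyAndVerify copies src to node:dst with `minikube cp`, reads it
    // back with `minikube ssh -n node "sudo cat dst"`, and compares.
    func copyAndVerify(profile, node, src, dst, want string) error {
        cp := exec.Command("out/minikube-linux-amd64", "-p", profile,
            "cp", src, node+":"+dst)
        if out, err := cp.CombinedOutput(); err != nil {
            return fmt.Errorf("cp failed: %v: %s", err, out)
        }
        got, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
            "ssh", "-n", node, "sudo cat "+dst).Output()
        if err != nil {
            return err
        }
        if string(got) != want {
            return fmt.Errorf("content mismatch: got %q, want %q", got, want)
        }
        return nil
    }

    func main() {
        // "placeholder\n" stands in for the real cp-test.txt contents.
        err := copyAndVerify("multinode-185994", "multinode-185994-m02",
            "testdata/cp-test.txt", "/home/docker/cp-test.txt", "placeholder\n")
        fmt.Println(err)
    }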

                                                
                                    
TestMultiNode/serial/StopNode (3.35s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-185994 node stop m03: (2.541242691s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-185994 status: exit status 7 (403.773474ms)

                                                
                                                
-- stdout --
	multinode-185994
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-185994-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-185994-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-185994 status --alsologtostderr: exit status 7 (401.079734ms)

                                                
                                                
-- stdout --
	multinode-185994
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-185994-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-185994-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 00:59:03.268811   37511 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:59:03.269196   37511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:59:03.269212   37511 out.go:304] Setting ErrFile to fd 2...
	I0804 00:59:03.269218   37511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:59:03.269427   37511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
	I0804 00:59:03.269617   37511 out.go:298] Setting JSON to false
	I0804 00:59:03.269643   37511 mustload.go:65] Loading cluster: multinode-185994
	I0804 00:59:03.269735   37511 notify.go:220] Checking for updates...
	I0804 00:59:03.270098   37511 config.go:182] Loaded profile config "multinode-185994": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 00:59:03.270115   37511 status.go:255] checking status of multinode-185994 ...
	I0804 00:59:03.270612   37511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:59:03.270667   37511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:59:03.290316   37511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38765
	I0804 00:59:03.290779   37511 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:59:03.291366   37511 main.go:141] libmachine: Using API Version  1
	I0804 00:59:03.291390   37511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:59:03.291713   37511 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:59:03.291877   37511 main.go:141] libmachine: (multinode-185994) Calling .GetState
	I0804 00:59:03.293387   37511 status.go:330] multinode-185994 host status = "Running" (err=<nil>)
	I0804 00:59:03.293405   37511 host.go:66] Checking if "multinode-185994" exists ...
	I0804 00:59:03.293795   37511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:59:03.293844   37511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:59:03.309042   37511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0804 00:59:03.309425   37511 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:59:03.309826   37511 main.go:141] libmachine: Using API Version  1
	I0804 00:59:03.309845   37511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:59:03.310107   37511 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:59:03.310291   37511 main.go:141] libmachine: (multinode-185994) Calling .GetIP
	I0804 00:59:03.312800   37511 main.go:141] libmachine: (multinode-185994) DBG | domain multinode-185994 has defined MAC address 52:54:00:ff:e2:5e in network mk-multinode-185994
	I0804 00:59:03.313217   37511 main.go:141] libmachine: (multinode-185994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:e2:5e", ip: ""} in network mk-multinode-185994: {Iface:virbr1 ExpiryTime:2024-08-04 01:55:47 +0000 UTC Type:0 Mac:52:54:00:ff:e2:5e Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-185994 Clientid:01:52:54:00:ff:e2:5e}
	I0804 00:59:03.313247   37511 main.go:141] libmachine: (multinode-185994) DBG | domain multinode-185994 has defined IP address 192.168.39.3 and MAC address 52:54:00:ff:e2:5e in network mk-multinode-185994
	I0804 00:59:03.313382   37511 host.go:66] Checking if "multinode-185994" exists ...
	I0804 00:59:03.313640   37511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:59:03.313675   37511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:59:03.327727   37511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41137
	I0804 00:59:03.328105   37511 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:59:03.328479   37511 main.go:141] libmachine: Using API Version  1
	I0804 00:59:03.328494   37511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:59:03.328810   37511 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:59:03.328979   37511 main.go:141] libmachine: (multinode-185994) Calling .DriverName
	I0804 00:59:03.329157   37511 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:59:03.329175   37511 main.go:141] libmachine: (multinode-185994) Calling .GetSSHHostname
	I0804 00:59:03.331864   37511 main.go:141] libmachine: (multinode-185994) DBG | domain multinode-185994 has defined MAC address 52:54:00:ff:e2:5e in network mk-multinode-185994
	I0804 00:59:03.332225   37511 main.go:141] libmachine: (multinode-185994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:e2:5e", ip: ""} in network mk-multinode-185994: {Iface:virbr1 ExpiryTime:2024-08-04 01:55:47 +0000 UTC Type:0 Mac:52:54:00:ff:e2:5e Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-185994 Clientid:01:52:54:00:ff:e2:5e}
	I0804 00:59:03.332251   37511 main.go:141] libmachine: (multinode-185994) DBG | domain multinode-185994 has defined IP address 192.168.39.3 and MAC address 52:54:00:ff:e2:5e in network mk-multinode-185994
	I0804 00:59:03.332387   37511 main.go:141] libmachine: (multinode-185994) Calling .GetSSHPort
	I0804 00:59:03.332566   37511 main.go:141] libmachine: (multinode-185994) Calling .GetSSHKeyPath
	I0804 00:59:03.332711   37511 main.go:141] libmachine: (multinode-185994) Calling .GetSSHUsername
	I0804 00:59:03.332864   37511 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/multinode-185994/id_rsa Username:docker}
	I0804 00:59:03.409719   37511 ssh_runner.go:195] Run: systemctl --version
	I0804 00:59:03.415831   37511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:59:03.430710   37511 kubeconfig.go:125] found "multinode-185994" server: "https://192.168.39.3:8443"
	I0804 00:59:03.430744   37511 api_server.go:166] Checking apiserver status ...
	I0804 00:59:03.430774   37511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:59:03.444664   37511 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1855/cgroup
	W0804 00:59:03.454339   37511 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1855/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:59:03.454383   37511 ssh_runner.go:195] Run: ls
	I0804 00:59:03.458426   37511 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I0804 00:59:03.462294   37511 api_server.go:279] https://192.168.39.3:8443/healthz returned 200:
	ok
	I0804 00:59:03.462314   37511 status.go:422] multinode-185994 apiserver status = Running (err=<nil>)
	I0804 00:59:03.462327   37511 status.go:257] multinode-185994 status: &{Name:multinode-185994 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:59:03.462364   37511 status.go:255] checking status of multinode-185994-m02 ...
	I0804 00:59:03.462653   37511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:59:03.462697   37511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:59:03.477578   37511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42441
	I0804 00:59:03.477985   37511 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:59:03.478549   37511 main.go:141] libmachine: Using API Version  1
	I0804 00:59:03.478574   37511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:59:03.478871   37511 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:59:03.479070   37511 main.go:141] libmachine: (multinode-185994-m02) Calling .GetState
	I0804 00:59:03.480567   37511 status.go:330] multinode-185994-m02 host status = "Running" (err=<nil>)
	I0804 00:59:03.480580   37511 host.go:66] Checking if "multinode-185994-m02" exists ...
	I0804 00:59:03.480835   37511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:59:03.480863   37511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:59:03.495634   37511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42727
	I0804 00:59:03.495966   37511 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:59:03.496358   37511 main.go:141] libmachine: Using API Version  1
	I0804 00:59:03.496377   37511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:59:03.496678   37511 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:59:03.496846   37511 main.go:141] libmachine: (multinode-185994-m02) Calling .GetIP
	I0804 00:59:03.499529   37511 main.go:141] libmachine: (multinode-185994-m02) DBG | domain multinode-185994-m02 has defined MAC address 52:54:00:88:21:c5 in network mk-multinode-185994
	I0804 00:59:03.499934   37511 main.go:141] libmachine: (multinode-185994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:21:c5", ip: ""} in network mk-multinode-185994: {Iface:virbr1 ExpiryTime:2024-08-04 01:57:10 +0000 UTC Type:0 Mac:52:54:00:88:21:c5 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-185994-m02 Clientid:01:52:54:00:88:21:c5}
	I0804 00:59:03.499967   37511 main.go:141] libmachine: (multinode-185994-m02) DBG | domain multinode-185994-m02 has defined IP address 192.168.39.168 and MAC address 52:54:00:88:21:c5 in network mk-multinode-185994
	I0804 00:59:03.500082   37511 host.go:66] Checking if "multinode-185994-m02" exists ...
	I0804 00:59:03.500412   37511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:59:03.500459   37511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:59:03.514117   37511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33669
	I0804 00:59:03.514557   37511 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:59:03.514991   37511 main.go:141] libmachine: Using API Version  1
	I0804 00:59:03.515012   37511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:59:03.515357   37511 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:59:03.515522   37511 main.go:141] libmachine: (multinode-185994-m02) Calling .DriverName
	I0804 00:59:03.515689   37511 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:59:03.515710   37511 main.go:141] libmachine: (multinode-185994-m02) Calling .GetSSHHostname
	I0804 00:59:03.518051   37511 main.go:141] libmachine: (multinode-185994-m02) DBG | domain multinode-185994-m02 has defined MAC address 52:54:00:88:21:c5 in network mk-multinode-185994
	I0804 00:59:03.518487   37511 main.go:141] libmachine: (multinode-185994-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:21:c5", ip: ""} in network mk-multinode-185994: {Iface:virbr1 ExpiryTime:2024-08-04 01:57:10 +0000 UTC Type:0 Mac:52:54:00:88:21:c5 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-185994-m02 Clientid:01:52:54:00:88:21:c5}
	I0804 00:59:03.518513   37511 main.go:141] libmachine: (multinode-185994-m02) DBG | domain multinode-185994-m02 has defined IP address 192.168.39.168 and MAC address 52:54:00:88:21:c5 in network mk-multinode-185994
	I0804 00:59:03.518648   37511 main.go:141] libmachine: (multinode-185994-m02) Calling .GetSSHPort
	I0804 00:59:03.518808   37511 main.go:141] libmachine: (multinode-185994-m02) Calling .GetSSHKeyPath
	I0804 00:59:03.519018   37511 main.go:141] libmachine: (multinode-185994-m02) Calling .GetSSHUsername
	I0804 00:59:03.519136   37511 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/multinode-185994-m02/id_rsa Username:docker}
	I0804 00:59:03.597185   37511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:59:03.611655   37511 status.go:257] multinode-185994-m02 status: &{Name:multinode-185994-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:59:03.611709   37511 status.go:255] checking status of multinode-185994-m03 ...
	I0804 00:59:03.612124   37511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 00:59:03.612168   37511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:59:03.626951   37511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43181
	I0804 00:59:03.627413   37511 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:59:03.627898   37511 main.go:141] libmachine: Using API Version  1
	I0804 00:59:03.627920   37511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:59:03.628290   37511 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:59:03.628447   37511 main.go:141] libmachine: (multinode-185994-m03) Calling .GetState
	I0804 00:59:03.629926   37511 status.go:330] multinode-185994-m03 host status = "Stopped" (err=<nil>)
	I0804 00:59:03.629940   37511 status.go:343] host is not running, skipping remaining checks
	I0804 00:59:03.629948   37511 status.go:257] multinode-185994-m03 status: &{Name:multinode-185994-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.35s)
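
The `exit status 7` from `status` is the expected signal here, not a failure: as far as I can tell from the minikube sources of this vintage, the status exit code is a small bitmask (host not running = 1, cluster not running = 2, kubernetes not running = 4) OR-ed across all nodes, so the stopped m03 alone contributes all three bits. A toy decoder under that assumption:

    package main

    import "fmt"

    // Assumed flag values; treat this as a sketch of minikube's status
    // exit-code convention, not a stable API.
    const (
        hostNotRunning    = 1 << 0
        clusterNotRunning = 1 << 1
        k8sNotRunning     = 1 << 2
    )

    func main() {
        code := 7 // observed above with one node stopped
        fmt.Println("host stopped somewhere:      ", code&hostNotRunning != 0)
        fmt.Println("cluster stopped somewhere:   ", code&clusterNotRunning != 0)
        fmt.Println("kubernetes stopped somewhere:", code&k8sNotRunning != 0)
    }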

                                                
                                    
TestMultiNode/serial/StartAfterStop (42.43s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-185994 node start m03 -v=7 --alsologtostderr: (41.835507178s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (42.43s)

TestMultiNode/serial/RestartKeepsNodes (191.6s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-185994
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-185994
E0804 01:00:06.990100   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-185994: (28.070575232s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-185994 --wait=true -v=8 --alsologtostderr
E0804 01:01:39.912874   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-185994 --wait=true -v=8 --alsologtostderr: (2m43.44516297s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-185994
--- PASS: TestMultiNode/serial/RestartKeepsNodes (191.60s)

TestMultiNode/serial/DeleteNode (2.18s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-185994 node delete m03: (1.664066372s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.18s)

TestMultiNode/serial/StopMultiNode (25.09s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 stop
E0804 01:03:10.037993   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-185994 stop: (24.933234837s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-185994 status: exit status 7 (80.964935ms)

                                                
                                                
-- stdout --
	multinode-185994
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-185994-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-185994 status --alsologtostderr: exit status 7 (77.286608ms)

                                                
                                                
-- stdout --
	multinode-185994
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-185994-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 01:03:24.895091   39296 out.go:291] Setting OutFile to fd 1 ...
	I0804 01:03:24.895211   39296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:03:24.895221   39296 out.go:304] Setting ErrFile to fd 2...
	I0804 01:03:24.895224   39296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:03:24.895436   39296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
	I0804 01:03:24.895586   39296 out.go:298] Setting JSON to false
	I0804 01:03:24.895607   39296 mustload.go:65] Loading cluster: multinode-185994
	I0804 01:03:24.895723   39296 notify.go:220] Checking for updates...
	I0804 01:03:24.896066   39296 config.go:182] Loaded profile config "multinode-185994": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0804 01:03:24.896085   39296 status.go:255] checking status of multinode-185994 ...
	I0804 01:03:24.896571   39296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 01:03:24.896615   39296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:03:24.914571   39296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32971
	I0804 01:03:24.914958   39296 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:03:24.915419   39296 main.go:141] libmachine: Using API Version  1
	I0804 01:03:24.915434   39296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:03:24.915750   39296 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:03:24.915923   39296 main.go:141] libmachine: (multinode-185994) Calling .GetState
	I0804 01:03:24.917388   39296 status.go:330] multinode-185994 host status = "Stopped" (err=<nil>)
	I0804 01:03:24.917400   39296 status.go:343] host is not running, skipping remaining checks
	I0804 01:03:24.917405   39296 status.go:257] multinode-185994 status: &{Name:multinode-185994 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:03:24.917423   39296 status.go:255] checking status of multinode-185994-m02 ...
	I0804 01:03:24.917703   39296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0804 01:03:24.917733   39296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:03:24.931820   39296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33623
	I0804 01:03:24.932200   39296 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:03:24.932628   39296 main.go:141] libmachine: Using API Version  1
	I0804 01:03:24.932656   39296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:03:24.932965   39296 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:03:24.933134   39296 main.go:141] libmachine: (multinode-185994-m02) Calling .GetState
	I0804 01:03:24.934603   39296 status.go:330] multinode-185994-m02 host status = "Stopped" (err=<nil>)
	I0804 01:03:24.934616   39296 status.go:343] host is not running, skipping remaining checks
	I0804 01:03:24.934621   39296 status.go:257] multinode-185994-m02 status: &{Name:multinode-185994-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.09s)

TestMultiNode/serial/RestartMultiNode (116.07s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-185994 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0804 01:05:06.990614   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-185994 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m55.560534982s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-185994 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (116.07s)

TestMultiNode/serial/ValidateNameConflict (52.09s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-185994
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-185994-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-185994-m02 --driver=kvm2 : exit status 14 (56.914039ms)

                                                
                                                
-- stdout --
	* [multinode-185994-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-3947/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-3947/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-185994-m02' is duplicated with machine name 'multinode-185994-m02' in profile 'multinode-185994'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-185994-m03 --driver=kvm2 
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-185994-m03 --driver=kvm2 : (50.841222988s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-185994
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-185994: exit status 80 (208.068193ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-185994 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-185994-m03 already exists in multinode-185994-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-185994-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (52.09s)

TestPreload (193.54s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-798524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0804 01:06:39.912021   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-798524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (2m2.044838591s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-798524 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-798524 image pull gcr.io/k8s-minikube/busybox: (1.211987839s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-798524
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-798524: (12.520971035s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-798524 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-798524 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (56.707406874s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-798524 image list
helpers_test.go:175: Cleaning up "test-preload-798524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-798524
--- PASS: TestPreload (193.54s)

TestScheduledStopUnix (122.64s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-549024 --memory=2048 --driver=kvm2 
E0804 01:09:42.955477   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
E0804 01:10:06.990089   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-549024 --memory=2048 --driver=kvm2 : (51.066527198s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-549024 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-549024 -n scheduled-stop-549024
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-549024 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-549024 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-549024 -n scheduled-stop-549024
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-549024
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-549024 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-549024
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-549024: exit status 7 (61.311417ms)

                                                
                                                
-- stdout --
	scheduled-stop-549024
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-549024 -n scheduled-stop-549024
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-549024 -n scheduled-stop-549024: exit status 7 (63.789104ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-549024" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-549024
--- PASS: TestScheduledStopUnix (122.64s)
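
The sequence above walks the scheduled-stop lifecycle: `--schedule 5m` arms a background stop, issuing a new `--schedule` replaces the earlier timer (hence the `os: process already finished` notes once the old daemon is gone), and `--cancel-scheduled` clears any pending stop. A sketch driving the same sequence the way the harness does, through the test build of the binary:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run shells out to the test build of minikube, as the harness does.
    func run(args ...string) {
        out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
        fmt.Printf("minikube %v\n%s(err=%v)\n", args, out, err)
    }

    func main() {
        p := "scheduled-stop-549024"
        run("stop", "-p", p, "--schedule", "5m")   // arm a 5-minute stop
        run("stop", "-p", p, "--schedule", "15s")  // re-arm; replaces the timer
        run("stop", "-p", p, "--cancel-scheduled") // clear the pending stop
    }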

                                                
                                    
TestSkaffold (134.39s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3018129511 version
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-042002 --memory=2600 --driver=kvm2 
E0804 01:11:39.913395   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-042002 --memory=2600 --driver=kvm2 : (53.413054347s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3018129511 run --minikube-profile skaffold-042002 --kube-context skaffold-042002 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3018129511 run --minikube-profile skaffold-042002 --kube-context skaffold-042002 --status-check=true --port-forward=false --interactive=false: (1m8.131660442s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5f5df6548c-nvfhb" [00f66f3d-0842-471b-82cf-f2569b3ffb7d] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004578381s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7c96d4cc4-gjlhc" [78ffbe40-b8b8-4ea0-a9d4-fda3b4d916ce] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.00335021s
helpers_test.go:175: Cleaning up "skaffold-042002" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-042002
--- PASS: TestSkaffold (134.39s)

TestRunningBinaryUpgrade (146.22s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3204765737 start -p running-upgrade-572176 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3204765737 start -p running-upgrade-572176 --memory=2200 --vm-driver=kvm2 : (1m19.569474716s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-572176 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-572176 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m5.188038765s)
helpers_test.go:175: Cleaning up "running-upgrade-572176" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-572176
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-572176: (1.005587493s)
--- PASS: TestRunningBinaryUpgrade (146.22s)

TestKubernetesUpgrade (184.38s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-279798 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
E0804 01:16:39.912139   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-279798 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (1m8.250581095s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-279798
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-279798: (12.816532743s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-279798 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-279798 status --format={{.Host}}: exit status 7 (84.857252ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-279798 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-279798 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2 : (1m7.368321681s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-279798 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-279798 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-279798 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (83.489153ms)

-- stdout --
	* [kubernetes-upgrade-279798] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-3947/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-3947/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-279798
	    minikube start -p kubernetes-upgrade-279798 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2797982 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-279798 --kubernetes-version=v1.31.0-rc.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-279798 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-279798 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2 : (34.438918541s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-279798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-279798
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-279798: (1.280828436s)
--- PASS: TestKubernetesUpgrade (184.38s)
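
Note: this test leans on exit codes rather than command success: `status --format={{.Host}}` exits 7 once the VM is stopped (version_upgrade_test.go:232, "may be ok"), and the guarded downgrade exits 106 (K8S_DOWNGRADE_UNSUPPORTED). A minimal Go sketch of the same flow driven with os/exec — the exitCode helper is hypothetical, not part of the suite, and the exit-code meanings are as observed in this run, not a stable contract:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs the minikube binary used in this report and returns its
// exit status; 0 means success, -1 means the command could not run at all.
func exitCode(args ...string) int {
	err := exec.Command("out/minikube-linux-amd64", args...).Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	if err != nil {
		return -1 // binary missing, killed by signal, etc.
	}
	return 0
}

func main() {
	const p = "kubernetes-upgrade-279798"
	// After `stop`, status exits 7 while still printing "Stopped",
	// so callers must branch on the code, not on err == nil.
	if exitCode("-p", p, "status", "--format={{.Host}}") == 7 {
		fmt.Println("host stopped, as expected")
	}
	// The downgrade attempt is refused up front with exit status 106.
	if exitCode("start", "-p", p, "--memory=2200",
		"--kubernetes-version=v1.20.0", "--driver=kvm2") == 106 {
		fmt.Println("downgrade refused, as expected")
	}
}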

TestStoppedBinaryUpgrade/Setup (0.43s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.43s)

TestStoppedBinaryUpgrade/Upgrade (125.46s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1310576526 start -p stopped-upgrade-191363 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1310576526 start -p stopped-upgrade-191363 --memory=2200 --vm-driver=kvm2 : (1m4.847015488s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1310576526 -p stopped-upgrade-191363 stop
E0804 01:18:53.705358   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/skaffold-042002/client.crt: no such file or directory
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1310576526 -p stopped-upgrade-191363 stop: (12.517951482s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-191363 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-191363 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (48.091571836s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (125.46s)

TestPause/serial/Start (113.62s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-424991 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
E0804 01:18:33.224290   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/skaffold-042002/client.crt: no such file or directory
E0804 01:18:33.229639   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/skaffold-042002/client.crt: no such file or directory
E0804 01:18:33.239893   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/skaffold-042002/client.crt: no such file or directory
E0804 01:18:33.260154   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/skaffold-042002/client.crt: no such file or directory
E0804 01:18:33.300476   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/skaffold-042002/client.crt: no such file or directory
E0804 01:18:33.380821   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/skaffold-042002/client.crt: no such file or directory
E0804 01:18:33.541199   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/skaffold-042002/client.crt: no such file or directory
E0804 01:18:33.861792   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/skaffold-042002/client.crt: no such file or directory
E0804 01:18:34.502214   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/skaffold-042002/client.crt: no such file or directory
E0804 01:18:35.782636   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/skaffold-042002/client.crt: no such file or directory
E0804 01:18:38.343465   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/skaffold-042002/client.crt: no such file or directory
E0804 01:18:43.464584   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/skaffold-042002/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-424991 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m53.615723876s)
--- PASS: TestPause/serial/Start (113.62s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-191363
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-191363: (1.116558334s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-946829 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-946829 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (71.188302ms)

-- stdout --
	* [NoKubernetes-946829] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-3947/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-3947/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

TestNoKubernetes/serial/StartWithK8s (69.31s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-946829 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-946829 --driver=kvm2 : (1m9.031865808s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-946829 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (69.31s)

TestPause/serial/SecondStartNoReconfiguration (80.3s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-424991 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-424991 --alsologtostderr -v=1 --driver=kvm2 : (1m20.278333362s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (80.30s)

TestNetworkPlugins/group/auto/Start (134.99s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-643335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E0804 01:20:51.734592   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/gvisor-169607/client.crt: no such file or directory
E0804 01:20:51.739896   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/gvisor-169607/client.crt: no such file or directory
E0804 01:20:51.750213   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/gvisor-169607/client.crt: no such file or directory
E0804 01:20:51.770634   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/gvisor-169607/client.crt: no such file or directory
E0804 01:20:51.810965   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/gvisor-169607/client.crt: no such file or directory
E0804 01:20:51.891295   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/gvisor-169607/client.crt: no such file or directory
E0804 01:20:52.051722   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/gvisor-169607/client.crt: no such file or directory
E0804 01:20:52.372295   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/gvisor-169607/client.crt: no such file or directory
E0804 01:20:53.013213   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/gvisor-169607/client.crt: no such file or directory
E0804 01:20:54.294354   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/gvisor-169607/client.crt: no such file or directory
E0804 01:20:56.854676   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/gvisor-169607/client.crt: no such file or directory
E0804 01:21:01.978402   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/gvisor-169607/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-643335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (2m14.994534567s)
--- PASS: TestNetworkPlugins/group/auto/Start (134.99s)

TestNoKubernetes/serial/StartWithStopK8s (43.16s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-946829 --no-kubernetes --driver=kvm2 
E0804 01:21:12.218987   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/gvisor-169607/client.crt: no such file or directory
E0804 01:21:17.067369   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/skaffold-042002/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-946829 --no-kubernetes --driver=kvm2 : (41.517852898s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-946829 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-946829 status -o json: exit status 2 (245.204018ms)

-- stdout --
	{"Name":"NoKubernetes-946829","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-946829
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-946829: (1.392896759s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (43.16s)
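
Note: the JSON blob above is what the check keys off: with Kubernetes disabled the host stays Running while Kubelet and APIServer report Stopped, and the status command itself exits 2. A sketch of decoding it — the struct mirrors only the fields visible in this run's output, and stdout is read even though Run fails, since exit 2 still carries the payload:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileStatus mirrors the fields seen in the JSON above; any extra
// fields minikube emits are simply ignored by json.Unmarshal.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// Output() captures stdout even when the command exits non-zero
	// (status exits 2 here because components are stopped).
	out, _ := exec.Command("out/minikube-linux-amd64",
		"-p", "NoKubernetes-946829", "status", "-o", "json").Output()
	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("unexpected status payload:", err)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}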

TestNetworkPlugins/group/kindnet/Start (83.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-643335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
E0804 01:21:39.911812   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-643335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m23.148405695s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (83.15s)

TestPause/serial/Pause (0.61s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-424991 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.61s)

TestPause/serial/VerifyStatus (0.25s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-424991 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-424991 --output=json --layout=cluster: exit status 2 (250.92282ms)

-- stdout --
	{"Name":"pause-424991","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-424991","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)
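
Note: the --layout=cluster JSON above encodes component health as HTTP-like codes (200 OK, 405 Stopped, 418 Paused), and the command exits 2 while the cluster is paused. A decoding sketch whose types mirror only the fields present in that output — an illustration, not minikube's own schema definition:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type component struct {
	Name       string
	StatusCode int
	StatusName string
}

type node struct {
	Name       string
	StatusCode int
	StatusName string
	Components map[string]component
}

type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []node
}

func main() {
	// Exits 2 while paused, so stdout is used even though err != nil.
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"-p", "pause-424991", "--output=json", "--layout=cluster").Output()
	var cs clusterStatus
	if err := json.Unmarshal(out, &cs); err != nil {
		fmt.Println("unexpected status payload:", err)
		return
	}
	for _, n := range cs.Nodes {
		for name, c := range n.Components {
			fmt.Printf("%s/%s: %d %s\n", n.Name, name, c.StatusCode, c.StatusName)
		}
	}
}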

TestPause/serial/Unpause (0.53s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-424991 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.53s)

TestPause/serial/PauseAgain (0.82s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-424991 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.82s)

TestPause/serial/DeletePaused (1.01s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-424991 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-424991 --alsologtostderr -v=5: (1.00651605s)
--- PASS: TestPause/serial/DeletePaused (1.01s)

TestPause/serial/VerifyDeletedResources (0.64s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.64s)

TestNoKubernetes/serial/Start (44.5s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-946829 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-946829 --no-kubernetes --driver=kvm2 : (44.497588s)
--- PASS: TestNoKubernetes/serial/Start (44.50s)

TestNetworkPlugins/group/calico/Start (139.35s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-643335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
E0804 01:22:13.659662   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/gvisor-169607/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-643335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m19.350062189s)
--- PASS: TestNetworkPlugins/group/calico/Start (139.35s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-946829 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-946829 "sudo systemctl is-active --quiet service kubelet": exit status 1 (193.357593ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
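
Note: the assertion here is inverted — the subtest passes because the ssh'd `systemctl is-active` exits non-zero (the underlying ssh status 3 is systemd's conventional "inactive"), proving no kubelet runs on a --no-kubernetes node. A sketch of the same check, offered as an illustration rather than the suite's helper:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active` exits 0 only when the unit is active, so a
	// failed Run() is the desired outcome for a node without Kubernetes.
	err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-946829",
		"sudo systemctl is-active --quiet service kubelet").Run()
	if err == nil {
		fmt.Println("FAIL: kubelet is active on a --no-kubernetes node")
		return
	}
	fmt.Println("ok: kubelet not running")
}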

TestNoKubernetes/serial/ProfileList (1.1s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.10s)

TestNoKubernetes/serial/Stop (2.36s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-946829
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-946829: (2.355589103s)
--- PASS: TestNoKubernetes/serial/Stop (2.36s)

TestNoKubernetes/serial/StartNoArgs (47.05s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-946829 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-946829 --driver=kvm2 : (47.053711339s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (47.05s)

TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-643335 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

TestNetworkPlugins/group/auto/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-643335 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bpxf5" [797fb6d5-9fcf-4427-84f0-a730eaf71783] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-bpxf5" [797fb6d5-9fcf-4427-84f0-a730eaf71783] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005216903s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.23s)
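
Note: the "waiting 15m0s for pods matching app=netcat" lines above are a poll over pod state — the Pending/Running transitions in the helpers_test.go output show the pods being watched until they leave Pending and become Ready. A minimal polling sketch in the same spirit, shelling out to kubectl; the waitForLabel helper is hypothetical and cruder than the suite's real helpers, which also track Ready conditions:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForLabel polls pod phases for a label selector until none are
// Pending or the timeout elapses.
func waitForLabel(kubeContext, ns, label string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext, "-n", ns,
			"get", "pods", "-l", label,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		if err == nil && len(phases) > 0 && !strings.Contains(string(out), "Pending") {
			return nil // every matching pod has left Pending
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %q in namespace %q", label, ns)
}

func main() {
	if err := waitForLabel("auto-643335", "default", "app=netcat", 15*time.Minute); err != nil {
		fmt.Println(err)
	}
}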

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-tqhmj" [af6b35be-d0b2-4d73-be26-c8bca1888cb1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006221798s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-643335 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-643335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-643335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
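
Note: Localhost and HairPin above are the same nc one-liner aimed at two targets. "Localhost" checks the netcat pod can reach its own port directly; "HairPin" targets the name `netcat`, presumably the pod reaching itself back through its own Service, which exercises hairpin NAT in the CNI/kube-proxy path. A sketch wrapping the probes (the probe helper and the Service interpretation are assumptions drawn from the test names, not confirmed by the log):

package main

import (
	"fmt"
	"os/exec"
)

// probe runs the same nc connectivity check the tests use, inside the
// netcat deployment's pod.
func probe(kubeContext, target string) error {
	return exec.Command("kubectl", "--context", kubeContext,
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target)).Run()
}

func main() {
	for _, target := range []string{"localhost", "netcat"} {
		if err := probe("auto-643335", target); err != nil {
			fmt.Printf("%s: unreachable (%v)\n", target, err)
			continue
		}
		fmt.Printf("%s: reachable\n", target)
	}
}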

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-643335 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.7s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-643335 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-6z9r4" [bd70c32c-c486-413f-a115-2cdd9d8d923e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-6z9r4" [bd70c32c-c486-413f-a115-2cdd9d8d923e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004949633s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.70s)

TestNetworkPlugins/group/custom-flannel/Start (90.81s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-643335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-643335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m30.808749189s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (90.81s)

TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-643335 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-643335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-643335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-946829 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-946829 "sudo systemctl is-active --quiet service kubelet": exit status 1 (214.833577ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestNetworkPlugins/group/false/Start (100.39s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-643335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-643335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m40.386631234s)
--- PASS: TestNetworkPlugins/group/false/Start (100.39s)

TestNetworkPlugins/group/enable-default-cni/Start (117.03s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-643335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E0804 01:23:33.224630   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/skaffold-042002/client.crt: no such file or directory
E0804 01:23:35.580778   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/gvisor-169607/client.crt: no such file or directory
E0804 01:24:00.908105   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/skaffold-042002/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-643335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m57.033382508s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (117.03s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-tjvsb" [add90103-016d-43da-b8e1-d37f2bbbca1e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006932822s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-643335 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

TestNetworkPlugins/group/calico/NetCatPod (11.19s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-643335 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-59sdh" [79950400-378a-4382-9378-51f9a5a3149c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-59sdh" [79950400-378a-4382-9378-51f9a5a3149c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004462798s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.19s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-643335 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-643335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-643335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

TestNetworkPlugins/group/flannel/Start (88.15s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-643335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-643335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m28.148411807s)
--- PASS: TestNetworkPlugins/group/flannel/Start (88.15s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-643335 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-643335 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-67nlg" [419a71eb-32db-4a0d-a2e3-539b51d88183] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-67nlg" [419a71eb-32db-4a0d-a2e3-539b51d88183] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.004213136s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.40s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-643335 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-643335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-643335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/false/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-643335 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.21s)

TestNetworkPlugins/group/false/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-643335 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-js2lq" [f4d5e2d8-5434-4127-9fee-25003b53a1b2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0804 01:25:06.990328   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-js2lq" [f4d5e2d8-5434-4127-9fee-25003b53a1b2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.004703106s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.24s)

TestNetworkPlugins/group/false/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-643335 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.20s)

TestNetworkPlugins/group/false/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-643335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

TestNetworkPlugins/group/false/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-643335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.18s)

TestNetworkPlugins/group/bridge/Start (108s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-643335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-643335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m47.997237583s)
--- PASS: TestNetworkPlugins/group/bridge/Start (108.00s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-643335 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-643335 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-k2s64" [0371532f-1bff-4e8a-badc-56e1cc6d8b91] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-k2s64" [0371532f-1bff-4e8a-badc-56e1cc6d8b91] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.005487609s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)

TestNetworkPlugins/group/kubenet/Start (122.66s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-643335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-643335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (2m2.657808502s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (122.66s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-643335 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-643335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-643335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestStartStop/group/old-k8s-version/serial/FirstStart (159.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-122039 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-122039 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (2m39.91740787s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (159.92s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-dlg7m" [dbb5273a-945b-4bba-91c3-8785f1a60bab] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005077176s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-643335 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/flannel/NetCatPod (12.22s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-643335 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7srkl" [7db850d5-a36a-431e-b7e8-b32308c4f093] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0804 01:26:19.421222   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/gvisor-169607/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-7srkl" [7db850d5-a36a-431e-b7e8-b32308c4f093] Running
E0804 01:26:22.955994   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003529144s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.22s)

TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-643335 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-643335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-643335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestStartStop/group/no-preload/serial/FirstStart (84.52s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-514997 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-514997 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0-rc.0: (1m24.524021402s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (84.52s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-643335 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

TestNetworkPlugins/group/bridge/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-643335 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7c565" [0f4cf2fa-ab2e-4322-a94d-a1a9f050bdc2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-7c565" [0f4cf2fa-ab2e-4322-a94d-a1a9f050bdc2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003430898s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.25s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-643335 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-643335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-643335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

TestStartStop/group/embed-certs/serial/FirstStart (108.75s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-167055 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-167055 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.3: (1m48.754642445s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (108.75s)
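
Note: --embed-certs inlines the client and CA material into the kubeconfig as base64 *-data fields instead of file paths. A quick check, assuming the cluster entry is named after the profile (--raw is needed because kubectl config view redacts certificate data by default):

    # non-empty output means the CA cert is embedded rather than referenced by path
    kubectl config view --raw -o jsonpath='{.clusters[?(@.name=="embed-certs-167055")].cluster.certificate-authority-data}' \
      | head -c 40; echo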

TestNetworkPlugins/group/kubenet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-643335 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.31s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-643335 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-rs2rc" [f7384854-898b-4e6e-aa67-ba20fb24843c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-rs2rc" [f7384854-898b-4e6e-aa67-ba20fb24843c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.005340351s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.31s)

TestNetworkPlugins/group/kubenet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-643335 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.20s)

TestNetworkPlugins/group/kubenet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-643335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.17s)

TestNetworkPlugins/group/kubenet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-643335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (77.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-790166 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.3
E0804 01:28:01.869216   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kindnet-643335/client.crt: no such file or directory
E0804 01:28:06.990188   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kindnet-643335/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-790166 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.3: (1m17.822309744s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (77.82s)
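
Note: --apiserver-port=8444 moves the API server off the default 8443, and that should surface in the kubeconfig server URL. A way to check, assuming the cluster entry matches the profile name:

    # expect a URL ending in :8444 for this profile
    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-790166")].cluster.server}'; echo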

TestStartStop/group/no-preload/serial/DeployApp (10.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-514997 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d5b7283f-b8ec-473c-9a64-017e11b6ff36] Pending
helpers_test.go:344: "busybox" [d5b7283f-b8ec-473c-9a64-017e11b6ff36] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0804 01:28:09.173368   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/auto-643335/client.crt: no such file or directory
helpers_test.go:344: "busybox" [d5b7283f-b8ec-473c-9a64-017e11b6ff36] Running
E0804 01:28:17.231379   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kindnet-643335/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004403539s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-514997 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.34s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-514997 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-514997 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.94s)
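
Note: the --images and --registries pairs override an addon's default image and registry per component; pointing MetricsServer at fake.domain is deliberate, since the follow-up describe only needs to confirm the override reached the deployment spec. Roughly:

    # enable with an overridden image/registry, then confirm the spec picked them up
    out/minikube-linux-amd64 addons enable metrics-server -p no-preload-514997 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context no-preload-514997 -n kube-system \
      describe deploy/metrics-server | grep -i image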

TestStartStop/group/no-preload/serial/Stop (13.35s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-514997 --alsologtostderr -v=3
E0804 01:28:29.653837   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/auto-643335/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-514997 --alsologtostderr -v=3: (13.345511614s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.35s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-514997 -n no-preload-514997
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-514997 -n no-preload-514997: exit status 7 (74.80529ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-514997 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)
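
Note: --format takes a Go template over minikube's status struct, and a stopped host makes the command exit non-zero (7 above), which is why the test logs "may be ok" and proceeds. Reproduced by hand:

    # on a stopped profile this prints Stopped and exits 7 rather than 0
    out/minikube-linux-amd64 status --format='{{.Host}}' -p no-preload-514997 \
      || echo "exit=$?"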

TestStartStop/group/no-preload/serial/SecondStart (362.13s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-514997 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0-rc.0
E0804 01:28:33.224878   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/skaffold-042002/client.crt: no such file or directory
E0804 01:28:37.711561   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kindnet-643335/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-514997 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0-rc.0: (6m1.871322134s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-514997 -n no-preload-514997
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (362.13s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-122039 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [da94a6aa-ae67-4f41-acd5-68f85e192d38] Pending
helpers_test.go:344: "busybox" [da94a6aa-ae67-4f41-acd5-68f85e192d38] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [da94a6aa-ae67-4f41-acd5-68f85e192d38] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004172047s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-122039 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.60s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-122039 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-122039 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/old-k8s-version/serial/Stop (13.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-122039 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-122039 --alsologtostderr -v=3: (13.374518038s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.37s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-122039 -n old-k8s-version-122039
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-122039 -n old-k8s-version-122039: exit status 7 (60.814494ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-122039 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (400.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-122039 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
E0804 01:29:04.957568   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/calico-643335/client.crt: no such file or directory
E0804 01:29:04.962860   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/calico-643335/client.crt: no such file or directory
E0804 01:29:04.973113   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/calico-643335/client.crt: no such file or directory
E0804 01:29:04.993386   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/calico-643335/client.crt: no such file or directory
E0804 01:29:05.033628   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/calico-643335/client.crt: no such file or directory
E0804 01:29:05.113959   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/calico-643335/client.crt: no such file or directory
E0804 01:29:05.274154   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/calico-643335/client.crt: no such file or directory
E0804 01:29:05.594705   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/calico-643335/client.crt: no such file or directory
E0804 01:29:06.235847   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/calico-643335/client.crt: no such file or directory
E0804 01:29:07.516373   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/calico-643335/client.crt: no such file or directory
E0804 01:29:10.077133   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/calico-643335/client.crt: no such file or directory
E0804 01:29:10.614043   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/auto-643335/client.crt: no such file or directory
E0804 01:29:15.197780   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/calico-643335/client.crt: no such file or directory
E0804 01:29:18.671927   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kindnet-643335/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-122039 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (6m40.57511189s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-122039 -n old-k8s-version-122039
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (400.85s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-790166 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [59cd9830-ae05-46f7-b11c-e1ae7a46667e] Pending
helpers_test.go:344: "busybox" [59cd9830-ae05-46f7-b11c-e1ae7a46667e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [59cd9830-ae05-46f7-b11c-e1ae7a46667e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004453357s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-790166 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.30s)

TestStartStop/group/embed-certs/serial/DeployApp (8.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-167055 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [19249100-d106-447b-a034-f2f714021ec6] Pending
helpers_test.go:344: "busybox" [19249100-d106-447b-a034-f2f714021ec6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0804 01:29:25.438350   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/calico-643335/client.crt: no such file or directory
helpers_test.go:344: "busybox" [19249100-d106-447b-a034-f2f714021ec6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005511815s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-167055 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.34s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-790166 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-790166 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-790166 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-790166 --alsologtostderr -v=3: (13.379702663s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.38s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-167055 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-167055 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.06110966s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-167055 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/embed-certs/serial/Stop (13.36s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-167055 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-167055 --alsologtostderr -v=3: (13.36217727s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-790166 -n default-k8s-diff-port-790166
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-790166 -n default-k8s-diff-port-790166: exit status 7 (63.594152ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-790166 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (325.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-790166 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-790166 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.3: (5m24.784276606s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-790166 -n default-k8s-diff-port-790166
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (325.03s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-167055 -n embed-certs-167055
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-167055 -n embed-certs-167055: exit status 7 (64.184923ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-167055 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (342.2s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-167055 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.3
E0804 01:29:45.919456   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/calico-643335/client.crt: no such file or directory
E0804 01:29:47.071854   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/custom-flannel-643335/client.crt: no such file or directory
E0804 01:29:47.077137   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/custom-flannel-643335/client.crt: no such file or directory
E0804 01:29:47.087413   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/custom-flannel-643335/client.crt: no such file or directory
E0804 01:29:47.107709   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/custom-flannel-643335/client.crt: no such file or directory
E0804 01:29:47.148037   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/custom-flannel-643335/client.crt: no such file or directory
E0804 01:29:47.228384   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/custom-flannel-643335/client.crt: no such file or directory
E0804 01:29:47.389454   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/custom-flannel-643335/client.crt: no such file or directory
E0804 01:29:47.710432   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/custom-flannel-643335/client.crt: no such file or directory
E0804 01:29:48.350587   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/custom-flannel-643335/client.crt: no such file or directory
E0804 01:29:49.631724   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/custom-flannel-643335/client.crt: no such file or directory
E0804 01:29:52.192879   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/custom-flannel-643335/client.crt: no such file or directory
E0804 01:29:57.313167   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/custom-flannel-643335/client.crt: no such file or directory
E0804 01:30:02.635538   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/false-643335/client.crt: no such file or directory
E0804 01:30:02.640833   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/false-643335/client.crt: no such file or directory
E0804 01:30:02.651079   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/false-643335/client.crt: no such file or directory
E0804 01:30:02.671349   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/false-643335/client.crt: no such file or directory
E0804 01:30:02.711655   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/false-643335/client.crt: no such file or directory
E0804 01:30:02.791994   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/false-643335/client.crt: no such file or directory
E0804 01:30:02.952404   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/false-643335/client.crt: no such file or directory
E0804 01:30:03.273080   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/false-643335/client.crt: no such file or directory
E0804 01:30:03.914084   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/false-643335/client.crt: no such file or directory
E0804 01:30:05.194626   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/false-643335/client.crt: no such file or directory
E0804 01:30:06.990750   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
E0804 01:30:07.554061   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/custom-flannel-643335/client.crt: no such file or directory
E0804 01:30:07.755529   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/false-643335/client.crt: no such file or directory
E0804 01:30:12.876378   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/false-643335/client.crt: no such file or directory
E0804 01:30:23.117043   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/false-643335/client.crt: no such file or directory
E0804 01:30:26.880453   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/calico-643335/client.crt: no such file or directory
E0804 01:30:28.034709   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/custom-flannel-643335/client.crt: no such file or directory
E0804 01:30:30.456970   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/enable-default-cni-643335/client.crt: no such file or directory
E0804 01:30:30.462282   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/enable-default-cni-643335/client.crt: no such file or directory
E0804 01:30:30.472555   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/enable-default-cni-643335/client.crt: no such file or directory
E0804 01:30:30.492779   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/enable-default-cni-643335/client.crt: no such file or directory
E0804 01:30:30.533049   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/enable-default-cni-643335/client.crt: no such file or directory
E0804 01:30:30.613375   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/enable-default-cni-643335/client.crt: no such file or directory
E0804 01:30:30.773681   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/enable-default-cni-643335/client.crt: no such file or directory
E0804 01:30:31.094279   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/enable-default-cni-643335/client.crt: no such file or directory
E0804 01:30:31.734575   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/enable-default-cni-643335/client.crt: no such file or directory
E0804 01:30:32.534228   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/auto-643335/client.crt: no such file or directory
E0804 01:30:33.015102   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/enable-default-cni-643335/client.crt: no such file or directory
E0804 01:30:35.575928   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/enable-default-cni-643335/client.crt: no such file or directory
E0804 01:30:40.592999   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kindnet-643335/client.crt: no such file or directory
E0804 01:30:40.696193   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/enable-default-cni-643335/client.crt: no such file or directory
E0804 01:30:43.597676   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/false-643335/client.crt: no such file or directory
E0804 01:30:50.936912   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/enable-default-cni-643335/client.crt: no such file or directory
E0804 01:30:51.734349   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/gvisor-169607/client.crt: no such file or directory
E0804 01:31:08.333824   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/flannel-643335/client.crt: no such file or directory
E0804 01:31:08.339143   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/flannel-643335/client.crt: no such file or directory
E0804 01:31:08.349705   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/flannel-643335/client.crt: no such file or directory
E0804 01:31:08.369989   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/flannel-643335/client.crt: no such file or directory
E0804 01:31:08.410318   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/flannel-643335/client.crt: no such file or directory
E0804 01:31:08.490598   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/flannel-643335/client.crt: no such file or directory
E0804 01:31:08.651041   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/flannel-643335/client.crt: no such file or directory
E0804 01:31:08.971405   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/flannel-643335/client.crt: no such file or directory
E0804 01:31:08.994847   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/custom-flannel-643335/client.crt: no such file or directory
E0804 01:31:09.612546   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/flannel-643335/client.crt: no such file or directory
E0804 01:31:10.893022   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/flannel-643335/client.crt: no such file or directory
E0804 01:31:11.417043   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/enable-default-cni-643335/client.crt: no such file or directory
E0804 01:31:13.453785   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/flannel-643335/client.crt: no such file or directory
E0804 01:31:18.574765   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/flannel-643335/client.crt: no such file or directory
E0804 01:31:24.558619   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/false-643335/client.crt: no such file or directory
E0804 01:31:28.815473   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/flannel-643335/client.crt: no such file or directory
E0804 01:31:39.911221   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
E0804 01:31:48.801404   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/calico-643335/client.crt: no such file or directory
E0804 01:31:49.296590   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/flannel-643335/client.crt: no such file or directory
E0804 01:31:52.378047   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/enable-default-cni-643335/client.crt: no such file or directory
E0804 01:32:05.840143   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/bridge-643335/client.crt: no such file or directory
E0804 01:32:05.845445   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/bridge-643335/client.crt: no such file or directory
E0804 01:32:05.855707   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/bridge-643335/client.crt: no such file or directory
E0804 01:32:05.875996   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/bridge-643335/client.crt: no such file or directory
E0804 01:32:05.916239   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/bridge-643335/client.crt: no such file or directory
E0804 01:32:05.996557   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/bridge-643335/client.crt: no such file or directory
E0804 01:32:06.156949   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/bridge-643335/client.crt: no such file or directory
E0804 01:32:06.477611   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/bridge-643335/client.crt: no such file or directory
E0804 01:32:07.118128   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/bridge-643335/client.crt: no such file or directory
E0804 01:32:08.398696   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/bridge-643335/client.crt: no such file or directory
E0804 01:32:10.959090   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/bridge-643335/client.crt: no such file or directory
E0804 01:32:16.080060   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/bridge-643335/client.crt: no such file or directory
E0804 01:32:26.320326   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/bridge-643335/client.crt: no such file or directory
E0804 01:32:30.257628   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/flannel-643335/client.crt: no such file or directory
E0804 01:32:30.915831   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/custom-flannel-643335/client.crt: no such file or directory
E0804 01:32:34.543125   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kubenet-643335/client.crt: no such file or directory
E0804 01:32:34.548453   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kubenet-643335/client.crt: no such file or directory
E0804 01:32:34.558800   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kubenet-643335/client.crt: no such file or directory
E0804 01:32:34.579034   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kubenet-643335/client.crt: no such file or directory
E0804 01:32:34.619285   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kubenet-643335/client.crt: no such file or directory
E0804 01:32:34.699592   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kubenet-643335/client.crt: no such file or directory
E0804 01:32:34.860020   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kubenet-643335/client.crt: no such file or directory
E0804 01:32:35.180889   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kubenet-643335/client.crt: no such file or directory
E0804 01:32:35.821039   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kubenet-643335/client.crt: no such file or directory
E0804 01:32:37.101529   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kubenet-643335/client.crt: no such file or directory
E0804 01:32:39.662587   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kubenet-643335/client.crt: no such file or directory
E0804 01:32:44.783075   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kubenet-643335/client.crt: no such file or directory
E0804 01:32:46.479671   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/false-643335/client.crt: no such file or directory
E0804 01:32:46.801038   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/bridge-643335/client.crt: no such file or directory
E0804 01:32:48.693081   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/auto-643335/client.crt: no such file or directory
E0804 01:32:55.024157   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kubenet-643335/client.crt: no such file or directory
E0804 01:32:56.747173   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kindnet-643335/client.crt: no such file or directory
E0804 01:33:14.298374   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/enable-default-cni-643335/client.crt: no such file or directory
E0804 01:33:15.505044   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kubenet-643335/client.crt: no such file or directory
E0804 01:33:16.374384   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/auto-643335/client.crt: no such file or directory
E0804 01:33:24.434102   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kindnet-643335/client.crt: no such file or directory
E0804 01:33:27.762070   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/bridge-643335/client.crt: no such file or directory
E0804 01:33:33.224921   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/skaffold-042002/client.crt: no such file or directory
E0804 01:33:52.177981   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/flannel-643335/client.crt: no such file or directory
E0804 01:33:56.465560   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kubenet-643335/client.crt: no such file or directory
E0804 01:34:04.957861   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/calico-643335/client.crt: no such file or directory
E0804 01:34:32.642550   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/calico-643335/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-167055 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.3: (5m41.93517935s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-167055 -n embed-certs-167055
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (342.20s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4g6ps" [1a307ac7-23d5-4d4b-90ff-ad0d2ac6f39a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004359736s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
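
Note: the wait above is the harness polling for a labeled pod; the same check can be hand-rolled with kubectl wait, with the timeout chosen to match the log's 9m budget:

    # block until the dashboard pod the test looks for reports Ready
    kubectl --context no-preload-514997 -n kubernetes-dashboard wait \
      --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=540s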

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4g6ps" [1a307ac7-23d5-4d4b-90ff-ad0d2ac6f39a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004299608s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-514997 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.72s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-514997 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.72s)
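
Note: the image audit works off minikube's JSON listing. A sketch of slicing it with jq; the repoTags field name is an assumption about the shape of that JSON output, not confirmed by this log:

    # flatten the JSON image list to one tag per line (field name assumed)
    out/minikube-linux-amd64 -p no-preload-514997 image list --format=json \
      | jq -r '.[].repoTags[]?'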

TestStartStop/group/no-preload/serial/Pause (2.45s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-514997 --alsologtostderr -v=1
E0804 01:34:47.072098   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/custom-flannel-643335/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-514997 -n no-preload-514997
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-514997 -n no-preload-514997: exit status 2 (238.692372ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-514997 -n no-preload-514997
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-514997 -n no-preload-514997: exit status 2 (232.347494ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-514997 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-514997 -n no-preload-514997
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-514997 -n no-preload-514997
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.45s)
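
Note: every Pause subtest in this report follows the same pause -> status -> unpause sequence recorded above. The Go sketch below replays it outside the test harness; it is a minimal illustration, not the suite's actual source. The binary path, profile name, and flags are copied verbatim from the log, and exit status 2 is tolerated the same way the test's "status error: exit status 2 (may be ok)" notes do.

// pause_replay.go: minimal sketch replaying the logged Pause sequence.
package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary from the log and returns its combined
// output plus the exit code (0 on success).
func run(args ...string) (string, int) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	return string(out), code
}

func main() {
	const profile = "no-preload-514997"

	run("pause", "-p", profile, "--alsologtostderr", "-v=1")

	// While paused, {{.APIServer}} prints "Paused" and {{.Kubelet}} prints
	// "Stopped"; both status calls exit 2, which the test logs as "may be ok".
	for _, field := range []string{"{{.APIServer}}", "{{.Kubelet}}"} {
		out, code := run("status", "--format="+field, "-p", profile, "-n", profile)
		fmt.Printf("%s -> %s (exit %d)\n", field, out, code)
	}

	run("unpause", "-p", profile, "--alsologtostderr", "-v=1")
}

The same sequence repeats below for the default-k8s-diff-port, embed-certs, old-k8s-version, and newest-cni profiles; only the profile name changes.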

TestStartStop/group/newest-cni/serial/FirstStart (62.99s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-171124 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0-rc.0
E0804 01:34:56.268305   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/skaffold-042002/client.crt: no such file or directory
E0804 01:35:02.635360   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/false-643335/client.crt: no such file or directory
E0804 01:35:06.990148   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-171124 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0-rc.0: (1m2.993724084s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (62.99s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-h6g82" [a0f05f12-f8bc-4f14-aab4-7ae23f5bdb69] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.011620116s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-h6g82" [a0f05f12-f8bc-4f14-aab4-7ae23f5bdb69] Running
E0804 01:35:14.756465   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/custom-flannel-643335/client.crt: no such file or directory
E0804 01:35:18.385676   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/kubenet-643335/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008654618s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-790166 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-790166 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-790166 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-790166 -n default-k8s-diff-port-790166
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-790166 -n default-k8s-diff-port-790166: exit status 2 (236.42733ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-790166 -n default-k8s-diff-port-790166
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-790166 -n default-k8s-diff-port-790166: exit status 2 (237.534166ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-790166 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-790166 -n default-k8s-diff-port-790166
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-790166 -n default-k8s-diff-port-790166
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.51s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-4x4vh" [56ddbc3c-cb5b-4e90-b82f-33bf0bbc2418] Running
E0804 01:35:30.320261   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/false-643335/client.crt: no such file or directory
E0804 01:35:30.456121   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/enable-default-cni-643335/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005082488s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-4x4vh" [56ddbc3c-cb5b-4e90-b82f-33bf0bbc2418] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005840615s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-167055 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-167055 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.57s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-167055 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-167055 -n embed-certs-167055
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-167055 -n embed-certs-167055: exit status 2 (235.957053ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-167055 -n embed-certs-167055
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-167055 -n embed-certs-167055: exit status 2 (240.367045ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-167055 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-167055 -n embed-certs-167055
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-167055 -n embed-certs-167055
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.57s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-wk829" [4f46bb5b-76a0-4daa-9110-d58c7aeb3b77] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004238596s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-wk829" [4f46bb5b-76a0-4daa-9110-d58c7aeb3b77] Running
E0804 01:35:51.734780   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/gvisor-169607/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004170231s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-122039 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.81s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-171124 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.81s)

TestStartStop/group/newest-cni/serial/Stop (12.62s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-171124 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-171124 --alsologtostderr -v=3: (12.624365204s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.62s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-122039 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/old-k8s-version/serial/Pause (2.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-122039 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-122039 -n old-k8s-version-122039
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-122039 -n old-k8s-version-122039: exit status 2 (231.803525ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-122039 -n old-k8s-version-122039
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-122039 -n old-k8s-version-122039: exit status 2 (229.510104ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-122039 --alsologtostderr -v=1
E0804 01:35:58.139444   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/enable-default-cni-643335/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-122039 -n old-k8s-version-122039
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-122039 -n old-k8s-version-122039
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.20s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-171124 -n newest-cni-171124
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-171124 -n newest-cni-171124: exit status 7 (62.333524ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-171124 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
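
Note: the EnableAddonAfterStop flow above can be reproduced by hand. This is a minimal sketch under stated assumptions, not the test's source: the binary path, profile, and flags are copied from the log; `minikube status` exiting 7 with "Stopped" is tolerated as the test's "may be ok" note does; and the comment about the addon taking effect on the next start reflects minikube persisting addon settings in the profile config.

// addon_after_stop.go: minimal sketch of the logged EnableAddonAfterStop flow.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	bin := "out/minikube-linux-amd64" // binary path from the log
	const profile = "newest-cni-171124"

	// A halted VM makes `minikube status` exit non-zero (7 with "Stopped"
	// in this run); the test treats that as acceptable.
	out, err := exec.Command(bin, "status", "--format={{.Host}}",
		"-p", profile, "-n", profile).CombinedOutput()
	fmt.Printf("host: %s(err: %v, may be ok)\n", out, err)

	// Enabling an addon while stopped updates the profile's stored config;
	// the addon is actually deployed when the cluster next starts.
	enable := exec.Command(bin, "addons", "enable", "dashboard", "-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	if err := enable.Run(); err != nil {
		fmt.Println("enable failed:", err)
	}
}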

TestStartStop/group/newest-cni/serial/SecondStart (38.41s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-171124 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0-rc.0
E0804 01:36:08.333440   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/flannel-643335/client.crt: no such file or directory
E0804 01:36:30.038969   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
E0804 01:36:36.018577   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/flannel-643335/client.crt: no such file or directory
E0804 01:36:39.911239   11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-171124 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0-rc.0: (38.168997846s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-171124 -n newest-cni-171124
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.41s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.64s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-171124 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.64s)
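
Note: the VerifyKubernetesImages subtests parse `image list --format=json` and report anything outside the expected image set, which is what the "Found non-minikube image" lines above record. A rough sketch of that check follows; the JSON schema assumed here (an array of entries carrying a repoTags list) may differ between minikube versions, and the registry-prefix filter is a stand-in for the test's real expected-image list, not its actual logic.

// image_check.go: rough sketch of the VerifyKubernetesImages idea.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// entry mirrors an assumed shape of `image list --format=json` output.
type entry struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "newest-cni-171124", "image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}

	var entries []entry
	if err := json.Unmarshal(out, &entries); err != nil {
		panic(err)
	}

	// Flag anything outside the core registry, the way the test reports
	// gcr.io/k8s-minikube/gvisor-addon:2 above.
	for _, e := range entries {
		for _, tag := range e.RepoTags {
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}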

TestStartStop/group/newest-cni/serial/Pause (2.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-171124 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-171124 -n newest-cni-171124
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-171124 -n newest-cni-171124: exit status 2 (226.811095ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-171124 -n newest-cni-171124
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-171124 -n newest-cni-171124: exit status 2 (222.610412ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-171124 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-171124 -n newest-cni-171124
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-171124 -n newest-cni-171124
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.10s)

Test skip (34/349)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-rc.0/cached-images 0
24 TestDownloadOnly/v1.31.0-rc.0/binaries 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
29 TestDownloadOnlyKic 0
47 TestAddons/parallel/Olm 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
115 TestFunctional/parallel/PodmanEnv 0
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
193 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
220 TestKicCustomNetwork 0
221 TestKicExistingNetwork 0
222 TestKicCustomSubnet 0
223 TestKicStaticIP 0
255 TestChangeNoneUser 0
258 TestScheduledStopWindows 0
262 TestInsufficientStorage 0
266 TestMissingContainerUpgrade 0
277 TestNetworkPlugins/group/cilium 3.47
287 TestStartStop/group/disable-driver-mounts 0.16

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (3.47s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-643335 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-643335

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-643335

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-643335

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-643335

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-643335

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-643335

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-643335

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-643335

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-643335

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-643335

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-643335

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-643335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-643335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-643335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-643335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-643335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-643335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-643335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-643335" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-643335

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-643335

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-643335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-643335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-643335

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-643335

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-643335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-643335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-643335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-643335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-643335" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-643335

>>> host: docker daemon status:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

>>> host: docker daemon config:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

>>> host: docker system info:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

>>> host: cri-docker daemon status:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

>>> host: cri-docker daemon config:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

>>> host: cri-dockerd version:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

>>> host: containerd daemon status:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

>>> host: containerd daemon config:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

>>> host: containerd config dump:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

>>> host: crio daemon status:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

>>> host: crio daemon config:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

>>> host: /etc/crio:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

>>> host: crio config:
* Profile "cilium-643335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-643335"

----------------------- debugLogs end: cilium-643335 [took: 3.328436655s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-643335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-643335
--- SKIP: TestNetworkPlugins/group/cilium (3.47s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-179695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-179695
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
