Test Report: KVM_Linux 17145

18848273edc5eb926291da53102e5aefa8069f6f:2023-08-30:30788

Failed tests (1/317)

Order   Failed test                            Duration (s)
213     TestMultiNode/serial/StartAfterStop    20.66
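To reproduce this outside CI, the failing test can be invoked directly with "go test". The sketch below is an assumption-laden outline, not the Jenkins job's actual command line: it assumes a minikube tree checked out at the commit above, a built out/minikube-linux-amd64, and a working kvm2 driver, and the --driver and -timeout values are choices made here. The serial/* subtests depend on cluster state created by earlier steps, so the whole TestMultiNode group is run rather than StartAfterStop alone.

	# Hedged sketch: re-run the failing integration test locally.
	# Run from the repo root; flags are assumptions, see above.
	make                                  # builds the minikube binary the tests drive
	go test ./test/integration -v -timeout 60m \
		-run 'TestMultiNode' \
		--args --driver=kvm2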
TestMultiNode/serial/StartAfterStop (20.66s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-944570 node start m03 --alsologtostderr: exit status 90 (18.048886857s)

-- stdout --
	* Starting worker node multinode-944570-m03 in cluster multinode-944570
	* Restarting existing kvm2 VM for "multinode-944570-m03" ...
	
	

-- /stdout --
** stderr ** 
	I0830 20:28:06.573562  244307 out.go:296] Setting OutFile to fd 1 ...
	I0830 20:28:06.573729  244307 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 20:28:06.573739  244307 out.go:309] Setting ErrFile to fd 2...
	I0830 20:28:06.573743  244307 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 20:28:06.573939  244307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-222139/.minikube/bin
	I0830 20:28:06.574189  244307 mustload.go:65] Loading cluster: multinode-944570
	I0830 20:28:06.575608  244307 config.go:182] Loaded profile config "multinode-944570": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0830 20:28:06.576245  244307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:28:06.576305  244307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:28:06.591317  244307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35525
	I0830 20:28:06.591776  244307 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:28:06.592489  244307 main.go:141] libmachine: Using API Version  1
	I0830 20:28:06.592514  244307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:28:06.592877  244307 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:28:06.593092  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetState
	W0830 20:28:06.594543  244307 host.go:58] "multinode-944570-m03" host status: Stopped
	I0830 20:28:06.596855  244307 out.go:177] * Starting worker node multinode-944570-m03 in cluster multinode-944570
	I0830 20:28:06.598212  244307 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0830 20:28:06.598256  244307 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17145-222139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4
	I0830 20:28:06.598266  244307 cache.go:57] Caching tarball of preloaded images
	I0830 20:28:06.598363  244307 preload.go:174] Found /home/jenkins/minikube-integration/17145-222139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0830 20:28:06.598375  244307 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0830 20:28:06.598508  244307 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/config.json ...
	I0830 20:28:06.598707  244307 start.go:365] acquiring machines lock for multinode-944570-m03: {Name:mk9a092bb7d2f42c1b785aa1d546d37ad26cec77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 20:28:06.598757  244307 start.go:369] acquired machines lock for "multinode-944570-m03" in 23.357µs
	I0830 20:28:06.598772  244307 start.go:96] Skipping create...Using existing machine configuration
	I0830 20:28:06.598776  244307 fix.go:54] fixHost starting: m03
	I0830 20:28:06.599038  244307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:28:06.599071  244307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:28:06.614301  244307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33299
	I0830 20:28:06.614773  244307 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:28:06.615382  244307 main.go:141] libmachine: Using API Version  1
	I0830 20:28:06.615403  244307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:28:06.615702  244307 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:28:06.615920  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
	I0830 20:28:06.616060  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetState
	I0830 20:28:06.617687  244307 fix.go:102] recreateIfNeeded on multinode-944570-m03: state=Stopped err=<nil>
	I0830 20:28:06.617717  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
	W0830 20:28:06.617886  244307 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 20:28:06.619821  244307 out.go:177] * Restarting existing kvm2 VM for "multinode-944570-m03" ...
	I0830 20:28:06.621392  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .Start
	I0830 20:28:06.621583  244307 main.go:141] libmachine: (multinode-944570-m03) Ensuring networks are active...
	I0830 20:28:06.622306  244307 main.go:141] libmachine: (multinode-944570-m03) Ensuring network default is active
	I0830 20:28:06.622618  244307 main.go:141] libmachine: (multinode-944570-m03) Ensuring network mk-multinode-944570 is active
	I0830 20:28:06.622938  244307 main.go:141] libmachine: (multinode-944570-m03) Getting domain xml...
	I0830 20:28:06.623618  244307 main.go:141] libmachine: (multinode-944570-m03) Creating domain...
	I0830 20:28:07.885177  244307 main.go:141] libmachine: (multinode-944570-m03) Waiting to get IP...
	I0830 20:28:07.886080  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:07.886564  244307 main.go:141] libmachine: (multinode-944570-m03) Found IP for machine: 192.168.39.83
	I0830 20:28:07.886598  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has current primary IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:07.886610  244307 main.go:141] libmachine: (multinode-944570-m03) Reserving static IP address...
	I0830 20:28:07.887023  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "multinode-944570-m03", mac: "52:54:00:21:38:ac", ip: "192.168.39.83"} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:27:24 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
	I0830 20:28:07.887056  244307 main.go:141] libmachine: (multinode-944570-m03) Reserved static IP address: 192.168.39.83
	I0830 20:28:07.887076  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | skip adding static IP to network mk-multinode-944570 - found existing host DHCP lease matching {name: "multinode-944570-m03", mac: "52:54:00:21:38:ac", ip: "192.168.39.83"}
	I0830 20:28:07.887096  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | Getting to WaitForSSH function...
	I0830 20:28:07.887114  244307 main.go:141] libmachine: (multinode-944570-m03) Waiting for SSH to be available...
	I0830 20:28:07.889355  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:07.889760  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:27:24 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
	I0830 20:28:07.889805  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:07.889875  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | Using SSH client type: external
	I0830 20:28:07.889913  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m03/id_rsa (-rw-------)
	I0830 20:28:07.889955  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.83 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 20:28:07.889971  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | About to run SSH command:
	I0830 20:28:07.889986  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | exit 0
	I0830 20:28:19.990768  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | SSH cmd err, output: <nil>: 
	I0830 20:28:19.991228  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetConfigRaw
	I0830 20:28:19.992027  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetIP
	I0830 20:28:19.994736  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:19.995178  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
	I0830 20:28:19.995218  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:19.995566  244307 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/config.json ...
	I0830 20:28:19.995804  244307 machine.go:88] provisioning docker machine ...
	I0830 20:28:19.995826  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
	I0830 20:28:19.996062  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetMachineName
	I0830 20:28:19.996233  244307 buildroot.go:166] provisioning hostname "multinode-944570-m03"
	I0830 20:28:19.996251  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetMachineName
	I0830 20:28:19.996393  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
	I0830 20:28:19.998799  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:19.999129  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
	I0830 20:28:19.999158  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:19.999322  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
	I0830 20:28:19.999531  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
	I0830 20:28:19.999724  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
	I0830 20:28:19.999869  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
	I0830 20:28:20.000039  244307 main.go:141] libmachine: Using SSH client type: native
	I0830 20:28:20.000672  244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I0830 20:28:20.000697  244307 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-944570-m03 && echo "multinode-944570-m03" | sudo tee /etc/hostname
	I0830 20:28:20.138639  244307 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-944570-m03
	
	I0830 20:28:20.138679  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
	I0830 20:28:20.141577  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:20.142086  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
	I0830 20:28:20.142129  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:20.142250  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
	I0830 20:28:20.142466  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
	I0830 20:28:20.142639  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
	I0830 20:28:20.142749  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
	I0830 20:28:20.142907  244307 main.go:141] libmachine: Using SSH client type: native
	I0830 20:28:20.143334  244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I0830 20:28:20.143352  244307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-944570-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-944570-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-944570-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 20:28:20.266328  244307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 20:28:20.266356  244307 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17145-222139/.minikube CaCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17145-222139/.minikube}
	I0830 20:28:20.266393  244307 buildroot.go:174] setting up certificates
	I0830 20:28:20.266406  244307 provision.go:83] configureAuth start
	I0830 20:28:20.266420  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetMachineName
	I0830 20:28:20.266734  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetIP
	I0830 20:28:20.269497  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:20.269864  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
	I0830 20:28:20.269904  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:20.270090  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
	I0830 20:28:20.272135  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:20.272553  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
	I0830 20:28:20.272582  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:20.272708  244307 provision.go:138] copyHostCerts
	I0830 20:28:20.272767  244307 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem, removing ...
	I0830 20:28:20.272777  244307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem
	I0830 20:28:20.272844  244307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem (1123 bytes)
	I0830 20:28:20.272966  244307 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem, removing ...
	I0830 20:28:20.272976  244307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem
	I0830 20:28:20.273002  244307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem (1675 bytes)
	I0830 20:28:20.273067  244307 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem, removing ...
	I0830 20:28:20.273074  244307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem
	I0830 20:28:20.273094  244307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem (1082 bytes)
	I0830 20:28:20.273172  244307 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca-key.pem org=jenkins.multinode-944570-m03 san=[192.168.39.83 192.168.39.83 localhost 127.0.0.1 minikube multinode-944570-m03]
	I0830 20:28:20.393764  244307 provision.go:172] copyRemoteCerts
	I0830 20:28:20.393820  244307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 20:28:20.393844  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
	I0830 20:28:20.396496  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:20.396831  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
	I0830 20:28:20.396864  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:20.397040  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
	I0830 20:28:20.397257  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
	I0830 20:28:20.397412  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
	I0830 20:28:20.397568  244307 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m03/id_rsa Username:docker}
	I0830 20:28:20.484425  244307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0830 20:28:20.505011  244307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0830 20:28:20.525576  244307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 20:28:20.545797  244307 provision.go:86] duration metric: configureAuth took 279.365155ms
	I0830 20:28:20.545834  244307 buildroot.go:189] setting minikube options for container-runtime
	I0830 20:28:20.546069  244307 config.go:182] Loaded profile config "multinode-944570": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0830 20:28:20.546094  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
	I0830 20:28:20.546398  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
	I0830 20:28:20.549013  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:20.549347  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
	I0830 20:28:20.549377  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:20.549558  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
	I0830 20:28:20.549744  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
	I0830 20:28:20.549908  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
	I0830 20:28:20.550025  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
	I0830 20:28:20.550201  244307 main.go:141] libmachine: Using SSH client type: native
	I0830 20:28:20.550580  244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I0830 20:28:20.550592  244307 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0830 20:28:20.669126  244307 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0830 20:28:20.669159  244307 buildroot.go:70] root file system type: tmpfs
	I0830 20:28:20.669312  244307 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0830 20:28:20.669338  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
	I0830 20:28:20.671868  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:20.672232  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
	I0830 20:28:20.672257  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:20.672449  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
	I0830 20:28:20.672640  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
	I0830 20:28:20.672815  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
	I0830 20:28:20.672955  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
	I0830 20:28:20.673120  244307 main.go:141] libmachine: Using SSH client type: native
	I0830 20:28:20.673795  244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I0830 20:28:20.673892  244307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0830 20:28:20.799169  244307 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0830 20:28:20.799231  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
	I0830 20:28:20.802123  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:20.802501  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
	I0830 20:28:20.802530  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:20.802699  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
	I0830 20:28:20.802869  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
	I0830 20:28:20.803010  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
	I0830 20:28:20.803149  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
	I0830 20:28:20.803394  244307 main.go:141] libmachine: Using SSH client type: native
	I0830 20:28:20.803892  244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I0830 20:28:20.803918  244307 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0830 20:28:21.578444  244307 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0830 20:28:21.578469  244307 machine.go:91] provisioned docker machine in 1.582651123s
	I0830 20:28:21.578480  244307 start.go:300] post-start starting for "multinode-944570-m03" (driver="kvm2")
	I0830 20:28:21.578490  244307 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 20:28:21.578511  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
	I0830 20:28:21.578900  244307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 20:28:21.578942  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
	I0830 20:28:21.581578  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:21.581969  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
	I0830 20:28:21.581997  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:21.582131  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
	I0830 20:28:21.582369  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
	I0830 20:28:21.582565  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
	I0830 20:28:21.582749  244307 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m03/id_rsa Username:docker}
	I0830 20:28:21.668786  244307 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 20:28:21.672898  244307 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 20:28:21.672928  244307 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-222139/.minikube/addons for local assets ...
	I0830 20:28:21.673010  244307 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-222139/.minikube/files for local assets ...
	I0830 20:28:21.673094  244307 filesync.go:149] local asset: /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem -> 2293472.pem in /etc/ssl/certs
	I0830 20:28:21.673181  244307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 20:28:21.682570  244307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem --> /etc/ssl/certs/2293472.pem (1708 bytes)
	I0830 20:28:21.702791  244307 start.go:303] post-start completed in 124.296018ms
	I0830 20:28:21.702818  244307 fix.go:56] fixHost completed within 15.104040753s
	I0830 20:28:21.702845  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
	I0830 20:28:21.705614  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:21.706051  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
	I0830 20:28:21.706103  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:21.706277  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
	I0830 20:28:21.706472  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
	I0830 20:28:21.706649  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
	I0830 20:28:21.706796  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
	I0830 20:28:21.706949  244307 main.go:141] libmachine: Using SSH client type: native
	I0830 20:28:21.707369  244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I0830 20:28:21.707382  244307 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0830 20:28:21.819938  244307 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693427301.771008702
	
	I0830 20:28:21.819966  244307 fix.go:206] guest clock: 1693427301.771008702
	I0830 20:28:21.819973  244307 fix.go:219] Guest: 2023-08-30 20:28:21.771008702 +0000 UTC Remote: 2023-08-30 20:28:21.702822945 +0000 UTC m=+15.165600981 (delta=68.185757ms)
	I0830 20:28:21.819993  244307 fix.go:190] guest clock delta is within tolerance: 68.185757ms
	I0830 20:28:21.819998  244307 start.go:83] releasing machines lock for "multinode-944570-m03", held for 15.221231305s
	I0830 20:28:21.820019  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
	I0830 20:28:21.820357  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetIP
	I0830 20:28:21.823024  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:21.823407  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
	I0830 20:28:21.823431  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:21.823638  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
	I0830 20:28:21.824224  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
	I0830 20:28:21.824406  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
	I0830 20:28:21.824518  244307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 20:28:21.824558  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
	I0830 20:28:21.824642  244307 ssh_runner.go:195] Run: systemctl --version
	I0830 20:28:21.824671  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
	I0830 20:28:21.827280  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:21.827583  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:21.827738  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
	I0830 20:28:21.827775  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:21.827921  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
	I0830 20:28:21.828041  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
	I0830 20:28:21.828081  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
	I0830 20:28:21.828087  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
	I0830 20:28:21.828217  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
	I0830 20:28:21.828321  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
	I0830 20:28:21.828348  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
	I0830 20:28:21.828494  244307 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m03/id_rsa Username:docker}
	I0830 20:28:21.828560  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
	I0830 20:28:21.828679  244307 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m03/id_rsa Username:docker}
	I0830 20:28:21.961459  244307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 20:28:21.966821  244307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 20:28:21.966920  244307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 20:28:21.981519  244307 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 20:28:21.981544  244307 start.go:466] detecting cgroup driver to use...
	I0830 20:28:21.981698  244307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 20:28:21.998451  244307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0830 20:28:22.007411  244307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0830 20:28:22.016484  244307 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0830 20:28:22.016544  244307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0830 20:28:22.025752  244307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0830 20:28:22.034759  244307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0830 20:28:22.043923  244307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0830 20:28:22.052964  244307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 20:28:22.062283  244307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0830 20:28:22.071333  244307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 20:28:22.079597  244307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 20:28:22.087564  244307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 20:28:22.189186  244307 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0830 20:28:22.206890  244307 start.go:466] detecting cgroup driver to use...
	I0830 20:28:22.206994  244307 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0830 20:28:22.220310  244307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 20:28:22.231888  244307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 20:28:22.247009  244307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 20:28:22.258664  244307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0830 20:28:22.269656  244307 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0830 20:28:22.300955  244307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0830 20:28:22.312769  244307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 20:28:22.329383  244307 ssh_runner.go:195] Run: which cri-dockerd
	I0830 20:28:22.332782  244307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0830 20:28:22.340908  244307 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0830 20:28:22.354724  244307 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0830 20:28:22.470530  244307 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0830 20:28:22.573526  244307 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0830 20:28:22.573569  244307 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0830 20:28:22.590701  244307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 20:28:22.696373  244307 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0830 20:28:24.102132  244307 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.405720981s)
	I0830 20:28:24.102211  244307 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0830 20:28:24.213758  244307 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0830 20:28:24.331361  244307 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0830 20:28:24.437979  244307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 20:28:24.557719  244307 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0830 20:28:24.572334  244307 out.go:177] 
	W0830 20:28:24.573820  244307 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W0830 20:28:24.573836  244307 out.go:239] * 
	* 
	W0830 20:28:24.576192  244307 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0830 20:28:24.577631  244307 out.go:177] 

** /stderr **
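The run ends with exit status 90, which the stderr above attributes to RUNTIME_ENABLE: "sudo systemctl restart cri-docker.socket" returned status 1 on the worker VM, and the only detail surfaced is the pointer to "journalctl -xe". Below is a sketch of how one might pull the underlying failure off the node; the profile, node, and unit names come from the log above, while the assumption that the VM is still reachable over SSH is not something the report confirms.

	# Hedged sketch: inspect the cri-docker units on the failed worker node.
	# -p selects the profile and -n the node, both as named in the log above.
	out/minikube-linux-amd64 -p multinode-944570 ssh -n multinode-944570-m03 -- \
		'sudo systemctl status cri-docker.socket cri-docker.service --no-pager;
		 sudo journalctl -xe --no-pager -u cri-docker.socket -u cri-docker.service'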
multinode_test.go:256: [the stderr log above is repeated verbatim here]
I0830 20:28:19.999158  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:19.999322  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:19.999531  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:19.999724  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:19.999869  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:20.000039  244307 main.go:141] libmachine: Using SSH client type: native
I0830 20:28:20.000672  244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
I0830 20:28:20.000697  244307 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-944570-m03 && echo "multinode-944570-m03" | sudo tee /etc/hostname
I0830 20:28:20.138639  244307 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-944570-m03

I0830 20:28:20.138679  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:20.141577  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.142086  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:20.142129  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.142250  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:20.142466  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.142639  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.142749  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:20.142907  244307 main.go:141] libmachine: Using SSH client type: native
I0830 20:28:20.143334  244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
I0830 20:28:20.143352  244307 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\smultinode-944570-m03' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-944570-m03/g' /etc/hosts;
			else 
				echo '127.0.1.1 multinode-944570-m03' | sudo tee -a /etc/hosts; 
			fi
		fi
I0830 20:28:20.266328  244307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
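The /etc/hosts edit above is idempotent: it rewrites an existing 127.0.1.1 entry to the node's hostname, appends one if none exists, and does nothing when the hostname is already mapped. A minimal manual check on the guest (a sketch, using the hostname from this run):

    grep '^127.0.1.1' /etc/hosts    # expect: 127.0.1.1 multinode-944570-m03
    hostname                        # expect: multinode-944570-m03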
I0830 20:28:20.266356  244307 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17145-222139/.minikube CaCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17145-222139/.minikube}
I0830 20:28:20.266393  244307 buildroot.go:174] setting up certificates
I0830 20:28:20.266406  244307 provision.go:83] configureAuth start
I0830 20:28:20.266420  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetMachineName
I0830 20:28:20.266734  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetIP
I0830 20:28:20.269497  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.269864  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:20.269904  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.270090  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:20.272135  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.272553  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:20.272582  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.272708  244307 provision.go:138] copyHostCerts
I0830 20:28:20.272767  244307 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem, removing ...
I0830 20:28:20.272777  244307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem
I0830 20:28:20.272844  244307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem (1123 bytes)
I0830 20:28:20.272966  244307 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem, removing ...
I0830 20:28:20.272976  244307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem
I0830 20:28:20.273002  244307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem (1675 bytes)
I0830 20:28:20.273067  244307 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem, removing ...
I0830 20:28:20.273074  244307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem
I0830 20:28:20.273094  244307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem (1082 bytes)
I0830 20:28:20.273172  244307 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca-key.pem org=jenkins.multinode-944570-m03 san=[192.168.39.83 192.168.39.83 localhost 127.0.0.1 minikube multinode-944570-m03]
I0830 20:28:20.393764  244307 provision.go:172] copyRemoteCerts
I0830 20:28:20.393820  244307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0830 20:28:20.393844  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:20.396496  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.396831  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:20.396864  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.397040  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:20.397257  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.397412  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:20.397568  244307 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m03/id_rsa Username:docker}
I0830 20:28:20.484425  244307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0830 20:28:20.505011  244307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0830 20:28:20.525576  244307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0830 20:28:20.545797  244307 provision.go:86] duration metric: configureAuth took 279.365155ms
I0830 20:28:20.545834  244307 buildroot.go:189] setting minikube options for container-runtime
I0830 20:28:20.546069  244307 config.go:182] Loaded profile config "multinode-944570": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 20:28:20.546094  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
I0830 20:28:20.546398  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:20.549013  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.549347  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:20.549377  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.549558  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:20.549744  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.549908  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.550025  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:20.550201  244307 main.go:141] libmachine: Using SSH client type: native
I0830 20:28:20.550580  244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
I0830 20:28:20.550592  244307 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0830 20:28:20.669126  244307 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0830 20:28:20.669159  244307 buildroot.go:70] root file system type: tmpfs
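A tmpfs root filesystem means the buildroot guest boots from volatile storage, so the docker unit cannot be assumed to persist across reboots and is rewritten on every provision (the "Updating docker unit" step that follows). The same check by hand (a sketch; either command should report tmpfs here):

    df --output=fstype / | tail -n 1
    stat -f -c %T /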
I0830 20:28:20.669312  244307 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0830 20:28:20.669338  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:20.671868  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.672232  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:20.672257  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.672449  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:20.672640  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.672815  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.672955  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:20.673120  244307 main.go:141] libmachine: Using SSH client type: native
I0830 20:28:20.673795  244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
I0830 20:28:20.673892  244307 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0830 20:28:20.799169  244307 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0830 20:28:20.799231  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:20.802123  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.802501  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:20.802530  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.802699  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:20.802869  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.803010  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.803149  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:20.803394  244307 main.go:141] libmachine: Using SSH client type: native
I0830 20:28:20.803892  244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
I0830 20:28:20.803918  244307 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0830 20:28:21.578444  244307 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
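The unit update above is a compare-and-swap: the new unit is written to docker.service.new, diffed against the installed unit, and moved into place (followed by daemon-reload, enable, restart) only when they differ. On this freshly booted tmpfs root the installed unit does not exist yet, so diff fails with "can't stat", which the || branch treats the same as a difference. The pattern in isolation (a sketch with a placeholder unit name):

    # write foo.service.new first, then:
    sudo diff -u /lib/systemd/system/foo.service /lib/systemd/system/foo.service.new \
      || { sudo mv /lib/systemd/system/foo.service.new /lib/systemd/system/foo.service; \
           sudo systemctl daemon-reload && sudo systemctl enable foo.service && sudo systemctl restart foo.service; }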

I0830 20:28:21.578469  244307 machine.go:91] provisioned docker machine in 1.582651123s
I0830 20:28:21.578480  244307 start.go:300] post-start starting for "multinode-944570-m03" (driver="kvm2")
I0830 20:28:21.578490  244307 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0830 20:28:21.578511  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
I0830 20:28:21.578900  244307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0830 20:28:21.578942  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:21.581578  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.581969  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:21.581997  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.582131  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:21.582369  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:21.582565  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:21.582749  244307 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m03/id_rsa Username:docker}
I0830 20:28:21.668786  244307 ssh_runner.go:195] Run: cat /etc/os-release
I0830 20:28:21.672898  244307 info.go:137] Remote host: Buildroot 2021.02.12
I0830 20:28:21.672928  244307 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-222139/.minikube/addons for local assets ...
I0830 20:28:21.673010  244307 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-222139/.minikube/files for local assets ...
I0830 20:28:21.673094  244307 filesync.go:149] local asset: /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem -> 2293472.pem in /etc/ssl/certs
I0830 20:28:21.673181  244307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0830 20:28:21.682570  244307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem --> /etc/ssl/certs/2293472.pem (1708 bytes)
I0830 20:28:21.702791  244307 start.go:303] post-start completed in 124.296018ms
I0830 20:28:21.702818  244307 fix.go:56] fixHost completed within 15.104040753s
I0830 20:28:21.702845  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:21.705614  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.706051  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:21.706103  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.706277  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:21.706472  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:21.706649  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:21.706796  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:21.706949  244307 main.go:141] libmachine: Using SSH client type: native
I0830 20:28:21.707369  244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
I0830 20:28:21.707382  244307 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0830 20:28:21.819938  244307 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693427301.771008702

I0830 20:28:21.819966  244307 fix.go:206] guest clock: 1693427301.771008702
I0830 20:28:21.819973  244307 fix.go:219] Guest: 2023-08-30 20:28:21.771008702 +0000 UTC Remote: 2023-08-30 20:28:21.702822945 +0000 UTC m=+15.165600981 (delta=68.185757ms)
I0830 20:28:21.819993  244307 fix.go:190] guest clock delta is within tolerance: 68.185757ms
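The %!s(MISSING).%!N(MISSING) artifacts in the command above are Go fmt placeholders for verbs with no argument; the command actually sent was almost certainly date +%s.%N. The provisioner compares that guest timestamp against the host clock and only forces a resync when the delta exceeds tolerance; the 68ms delta here is well within bounds. The same comparison by hand (a sketch, run from the host):

    guest=$(ssh docker@192.168.39.83 date +%s.%N)
    host=$(date +%s.%N)
    awk -v h="$host" -v g="$guest" 'BEGIN { printf "delta: %.3fs\n", h - g }'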
I0830 20:28:21.819998  244307 start.go:83] releasing machines lock for "multinode-944570-m03", held for 15.221231305s
I0830 20:28:21.820019  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
I0830 20:28:21.820357  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetIP
I0830 20:28:21.823024  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.823407  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:21.823431  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.823638  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
I0830 20:28:21.824224  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
I0830 20:28:21.824406  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
I0830 20:28:21.824518  244307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0830 20:28:21.824558  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:21.824642  244307 ssh_runner.go:195] Run: systemctl --version
I0830 20:28:21.824671  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:21.827280  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.827583  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.827738  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:21.827775  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.827921  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:21.828041  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:21.828081  244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.828087  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:21.828217  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:21.828321  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:21.828348  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:21.828494  244307 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m03/id_rsa Username:docker}
I0830 20:28:21.828560  244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:21.828679  244307 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m03/id_rsa Username:docker}
I0830 20:28:21.961459  244307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0830 20:28:21.966821  244307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0830 20:28:21.966920  244307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0830 20:28:21.981519  244307 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0830 20:28:21.981544  244307 start.go:466] detecting cgroup driver to use...
I0830 20:28:21.981698  244307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0830 20:28:21.998451  244307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0830 20:28:22.007411  244307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0830 20:28:22.016484  244307 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0830 20:28:22.016544  244307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0830 20:28:22.025752  244307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0830 20:28:22.034759  244307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0830 20:28:22.043923  244307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0830 20:28:22.052964  244307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0830 20:28:22.062283  244307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0830 20:28:22.071333  244307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0830 20:28:22.079597  244307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0830 20:28:22.087564  244307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0830 20:28:22.189186  244307 ssh_runner.go:195] Run: sudo systemctl restart containerd
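Although docker is the selected runtime, containerd is normalized first: the sed edits above pin the sandbox image, relax restrict_oom_score_adj, force the runc v2 shim, and set SystemdCgroup = false so containerd agrees with the cgroupfs driver configured for docker below. To spot-check the result of those edits on the guest (a sketch):

    grep -n 'SystemdCgroup' /etc/containerd/config.toml    # expect: SystemdCgroup = false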
I0830 20:28:22.206890  244307 start.go:466] detecting cgroup driver to use...
I0830 20:28:22.206994  244307 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0830 20:28:22.220310  244307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0830 20:28:22.231888  244307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0830 20:28:22.247009  244307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0830 20:28:22.258664  244307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0830 20:28:22.269656  244307 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0830 20:28:22.300955  244307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0830 20:28:22.312769  244307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0830 20:28:22.329383  244307 ssh_runner.go:195] Run: which cri-dockerd
I0830 20:28:22.332782  244307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0830 20:28:22.340908  244307 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0830 20:28:22.354724  244307 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0830 20:28:22.470530  244307 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0830 20:28:22.573526  244307 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
I0830 20:28:22.573569  244307 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
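The 144-byte daemon.json is generated in memory, so its contents are not captured in this log; given the "configuring docker to use cgroupfs" line, it is presumably what sets dockerd's cgroup driver. A hedged spot-check on the guest (the exact key shown is an assumption, not taken from this log):

    sudo cat /etc/docker/daemon.json    # expect something like: "exec-opts": ["native.cgroupdriver=cgroupfs"]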
I0830 20:28:22.590701  244307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0830 20:28:22.696373  244307 ssh_runner.go:195] Run: sudo systemctl restart docker
I0830 20:28:24.102132  244307 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.405720981s)
I0830 20:28:24.102211  244307 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0830 20:28:24.213758  244307 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0830 20:28:24.331361  244307 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0830 20:28:24.437979  244307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0830 20:28:24.557719  244307 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0830 20:28:24.572334  244307 out.go:177] 
W0830 20:28:24.573820  244307 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:

stderr:
Job failed. See "journalctl -xe" for details.

W0830 20:28:24.573836  244307 out.go:239] * 
W0830 20:28:24.576192  244307 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0830 20:28:24.577631  244307 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-linux-amd64 -p multinode-944570 node start m03 --alsologtostderr": exit status 90
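The proximate failure is sudo systemctl restart cri-docker.socket exiting 1 during runtime enablement; the captured stderr only defers to journalctl. Reasonable first diagnostics on the affected node (a sketch, using the unit names from the log above):

    sudo systemctl status cri-docker.socket cri-docker.service --no-pager
    sudo journalctl -u cri-docker.socket -u cri-docker.service --no-pager | tail -n 50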
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-944570 status: exit status 2 (579.416171ms)

-- stdout --
	multinode-944570
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-944570-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-944570-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-944570 status" : exit status 2
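Exit status 2 from status is consistent with the node start failure above: the m03 VM itself came back (host: Running) but provisioning aborted before the kubelet phase, so the node reports kubelet: Stopped. A follow-up check from the host (a sketch, assuming this minikube version supports ssh with a node selector):

    out/minikube-linux-amd64 -p multinode-944570 ssh -n m03 -- sudo systemctl is-active kubelet cri-docker.socket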
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-944570 -n multinode-944570
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-944570 logs -n 25: (1.088354527s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-944570 cp multinode-944570:/home/docker/cp-test.txt                           | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:27 UTC |
	|         | multinode-944570-m03:/home/docker/cp-test_multinode-944570_multinode-944570-m03.txt     |                  |         |         |                     |                     |
	| ssh     | multinode-944570 ssh -n                                                                 | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:27 UTC |
	|         | multinode-944570 sudo cat                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-944570 ssh -n multinode-944570-m03 sudo cat                                   | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:27 UTC |
	|         | /home/docker/cp-test_multinode-944570_multinode-944570-m03.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-944570 cp testdata/cp-test.txt                                                | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:27 UTC |
	|         | multinode-944570-m02:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-944570 ssh -n                                                                 | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:27 UTC |
	|         | multinode-944570-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-944570 cp multinode-944570-m02:/home/docker/cp-test.txt                       | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile109421544/001/cp-test_multinode-944570-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-944570 ssh -n                                                                 | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:27 UTC |
	|         | multinode-944570-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-944570 cp multinode-944570-m02:/home/docker/cp-test.txt                       | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:27 UTC |
	|         | multinode-944570:/home/docker/cp-test_multinode-944570-m02_multinode-944570.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-944570 ssh -n                                                                 | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:27 UTC |
	|         | multinode-944570-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-944570 ssh -n multinode-944570 sudo cat                                       | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:27 UTC |
	|         | /home/docker/cp-test_multinode-944570-m02_multinode-944570.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-944570 cp multinode-944570-m02:/home/docker/cp-test.txt                       | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:27 UTC |
	|         | multinode-944570-m03:/home/docker/cp-test_multinode-944570-m02_multinode-944570-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-944570 ssh -n                                                                 | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:28 UTC |
	|         | multinode-944570-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-944570 ssh -n multinode-944570-m03 sudo cat                                   | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
	|         | /home/docker/cp-test_multinode-944570-m02_multinode-944570-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-944570 cp testdata/cp-test.txt                                                | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
	|         | multinode-944570-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-944570 ssh -n                                                                 | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
	|         | multinode-944570-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-944570 cp multinode-944570-m03:/home/docker/cp-test.txt                       | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile109421544/001/cp-test_multinode-944570-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-944570 ssh -n                                                                 | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
	|         | multinode-944570-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-944570 cp multinode-944570-m03:/home/docker/cp-test.txt                       | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
	|         | multinode-944570:/home/docker/cp-test_multinode-944570-m03_multinode-944570.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-944570 ssh -n                                                                 | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
	|         | multinode-944570-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-944570 ssh -n multinode-944570 sudo cat                                       | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
	|         | /home/docker/cp-test_multinode-944570-m03_multinode-944570.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-944570 cp multinode-944570-m03:/home/docker/cp-test.txt                       | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
	|         | multinode-944570-m02:/home/docker/cp-test_multinode-944570-m03_multinode-944570-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-944570 ssh -n                                                                 | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
	|         | multinode-944570-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-944570 ssh -n multinode-944570-m02 sudo cat                                   | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
	|         | /home/docker/cp-test_multinode-944570-m03_multinode-944570-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-944570 node stop m03                                                          | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
	| node    | multinode-944570 node start                                                             | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC |                     |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 20:24:38
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 20:24:38.237538  241645 out.go:296] Setting OutFile to fd 1 ...
	I0830 20:24:38.237679  241645 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 20:24:38.237690  241645 out.go:309] Setting ErrFile to fd 2...
	I0830 20:24:38.237697  241645 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 20:24:38.237919  241645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-222139/.minikube/bin
	I0830 20:24:38.238591  241645 out.go:303] Setting JSON to false
	I0830 20:24:38.239555  241645 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7620,"bootTime":1693419458,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 20:24:38.239616  241645 start.go:138] virtualization: kvm guest
	I0830 20:24:38.241906  241645 out.go:177] * [multinode-944570] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 20:24:38.244008  241645 out.go:177]   - MINIKUBE_LOCATION=17145
	I0830 20:24:38.244037  241645 notify.go:220] Checking for updates...
	I0830 20:24:38.245609  241645 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 20:24:38.247196  241645 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17145-222139/kubeconfig
	I0830 20:24:38.248684  241645 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-222139/.minikube
	I0830 20:24:38.250032  241645 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 20:24:38.251947  241645 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 20:24:38.253529  241645 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 20:24:38.288180  241645 out.go:177] * Using the kvm2 driver based on user configuration
	I0830 20:24:38.289569  241645 start.go:298] selected driver: kvm2
	I0830 20:24:38.289588  241645 start.go:902] validating driver "kvm2" against <nil>
	I0830 20:24:38.289603  241645 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 20:24:38.290690  241645 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 20:24:38.290811  241645 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17145-222139/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 20:24:38.310813  241645 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 20:24:38.310865  241645 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0830 20:24:38.311070  241645 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0830 20:24:38.311106  241645 cni.go:84] Creating CNI manager for ""
	I0830 20:24:38.311119  241645 cni.go:136] 0 nodes found, recommending kindnet
	I0830 20:24:38.311124  241645 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0830 20:24:38.311134  241645 start_flags.go:319] config:
	{Name:multinode-944570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-944570 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
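The "0 nodes found, recommending kindnet" line a few lines up is minikube's CNI auto-selection: with no explicit CNI configured and a multi-node cluster requested, it settles on kindnet. A minimal sketch of that decision under those assumptions; the names and the single-node fallback are illustrative, not minikube's actual pkg/minikube/cni code:

// Hypothetical sketch of the CNI auto-selection logged above.
package main

import "fmt"

type clusterConfig struct {
	CNI                string // user-requested CNI; empty means "auto"
	MultiNodeRequested bool
	NodeCount          int
}

func chooseCNI(cc clusterConfig) string {
	if cc.CNI != "" {
		return cc.CNI // an explicit choice always wins
	}
	if cc.MultiNodeRequested || cc.NodeCount > 1 {
		return "kindnet" // pod traffic between nodes needs a real CNI
	}
	return "bridge" // assumed single-node default for this sketch
}

func main() {
	fmt.Println(chooseCNI(clusterConfig{MultiNodeRequested: true})) // kindnet
}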
	I0830 20:24:38.311268  241645 iso.go:125] acquiring lock: {Name:mk193fbe19fd874a72f32d45bb0f490410c0429c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 20:24:38.313041  241645 out.go:177] * Starting control plane node multinode-944570 in cluster multinode-944570
	I0830 20:24:38.314356  241645 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0830 20:24:38.314383  241645 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17145-222139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4
	I0830 20:24:38.314391  241645 cache.go:57] Caching tarball of preloaded images
	I0830 20:24:38.314457  241645 preload.go:174] Found /home/jenkins/minikube-integration/17145-222139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0830 20:24:38.314467  241645 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0830 20:24:38.314760  241645 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/config.json ...
	I0830 20:24:38.314780  241645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/config.json: {Name:mk4f0b9157dab9cab07456fdbb9784414d74dbfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 20:24:38.314908  241645 start.go:365] acquiring machines lock for multinode-944570: {Name:mk9a092bb7d2f42c1b785aa1d546d37ad26cec77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 20:24:38.314938  241645 start.go:369] acquired machines lock for "multinode-944570" in 15.217µs
	I0830 20:24:38.314954  241645 start.go:93] Provisioning new machine with config: &{Name:multinode-944570 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-944570 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0830 20:24:38.315006  241645 start.go:125] createHost starting for "" (driver="kvm2")
	I0830 20:24:38.316725  241645 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0830 20:24:38.316841  241645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:24:38.316868  241645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:24:38.330465  241645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39551
	I0830 20:24:38.330868  241645 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:24:38.331433  241645 main.go:141] libmachine: Using API Version  1
	I0830 20:24:38.331457  241645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:24:38.332665  241645 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:24:38.333120  241645 main.go:141] libmachine: (multinode-944570) Calling .GetMachineName
	I0830 20:24:38.334046  241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
	I0830 20:24:38.334277  241645 start.go:159] libmachine.API.Create for "multinode-944570" (driver="kvm2")
	I0830 20:24:38.334312  241645 client.go:168] LocalClient.Create starting
	I0830 20:24:38.334342  241645 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem
	I0830 20:24:38.334388  241645 main.go:141] libmachine: Decoding PEM data...
	I0830 20:24:38.334411  241645 main.go:141] libmachine: Parsing certificate...
	I0830 20:24:38.334477  241645 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem
	I0830 20:24:38.334504  241645 main.go:141] libmachine: Decoding PEM data...
	I0830 20:24:38.334520  241645 main.go:141] libmachine: Parsing certificate...
	I0830 20:24:38.334544  241645 main.go:141] libmachine: Running pre-create checks...
	I0830 20:24:38.334557  241645 main.go:141] libmachine: (multinode-944570) Calling .PreCreateCheck
	I0830 20:24:38.334891  241645 main.go:141] libmachine: (multinode-944570) Calling .GetConfigRaw
	I0830 20:24:38.335339  241645 main.go:141] libmachine: Creating machine...
	I0830 20:24:38.335356  241645 main.go:141] libmachine: (multinode-944570) Calling .Create
	I0830 20:24:38.335506  241645 main.go:141] libmachine: (multinode-944570) Creating KVM machine...
	I0830 20:24:38.336846  241645 main.go:141] libmachine: (multinode-944570) DBG | found existing default KVM network
	I0830 20:24:38.337568  241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:38.337425  241668 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000029a00}
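network.go's "using free private subnet" step above scans candidate private /24s and takes the first one that does not collide with an existing host interface. A self-contained sketch of that idea; the candidate list and the collision test are illustrative, and the real logic also tracks reservations (note the reservation field in the log line):

// Illustrative sketch of picking a free private /24 for the KVM network.
package main

import (
	"fmt"
	"net"
)

func subnetInUse(cidr string) bool {
	_, want, err := net.ParseCIDR(cidr)
	if err != nil {
		return true
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true // be conservative when the host can't be inspected
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && want.Contains(ipnet.IP) {
			return true // an existing interface already lives in this range
		}
	}
	return false
}

func main() {
	for _, c := range []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"} {
		if !subnetInUse(c) {
			fmt.Println("using free private subnet", c)
			return
		}
	}
	fmt.Println("no free subnet found")
}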
	I0830 20:24:38.342513  241645 main.go:141] libmachine: (multinode-944570) DBG | trying to create private KVM network mk-multinode-944570 192.168.39.0/24...
	I0830 20:24:38.421481  241645 main.go:141] libmachine: (multinode-944570) DBG | private KVM network mk-multinode-944570 192.168.39.0/24 created
	I0830 20:24:38.421520  241645 main.go:141] libmachine: (multinode-944570) Setting up store path in /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570 ...
	I0830 20:24:38.421538  241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:38.421419  241668 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17145-222139/.minikube
	I0830 20:24:38.421566  241645 main.go:141] libmachine: (multinode-944570) Building disk image from file:///home/jenkins/minikube-integration/17145-222139/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0830 20:24:38.421589  241645 main.go:141] libmachine: (multinode-944570) Downloading /home/jenkins/minikube-integration/17145-222139/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17145-222139/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0830 20:24:38.658920  241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:38.658756  241668 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa...
	I0830 20:24:38.856798  241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:38.856648  241668 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/multinode-944570.rawdisk...
	I0830 20:24:38.856835  241645 main.go:141] libmachine: (multinode-944570) DBG | Writing magic tar header
	I0830 20:24:38.856851  241645 main.go:141] libmachine: (multinode-944570) DBG | Writing SSH key tar header
	I0830 20:24:38.856863  241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:38.856774  241668 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570 ...
	I0830 20:24:38.856875  241645 main.go:141] libmachine: (multinode-944570) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570
	I0830 20:24:38.856924  241645 main.go:141] libmachine: (multinode-944570) Setting executable bit set on /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570 (perms=drwx------)
	I0830 20:24:38.856947  241645 main.go:141] libmachine: (multinode-944570) Setting executable bit set on /home/jenkins/minikube-integration/17145-222139/.minikube/machines (perms=drwxr-xr-x)
	I0830 20:24:38.856965  241645 main.go:141] libmachine: (multinode-944570) Setting executable bit set on /home/jenkins/minikube-integration/17145-222139/.minikube (perms=drwxr-xr-x)
	I0830 20:24:38.856989  241645 main.go:141] libmachine: (multinode-944570) Setting executable bit set on /home/jenkins/minikube-integration/17145-222139 (perms=drwxrwxr-x)
	I0830 20:24:38.857002  241645 main.go:141] libmachine: (multinode-944570) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17145-222139/.minikube/machines
	I0830 20:24:38.857013  241645 main.go:141] libmachine: (multinode-944570) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0830 20:24:38.857021  241645 main.go:141] libmachine: (multinode-944570) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0830 20:24:38.857028  241645 main.go:141] libmachine: (multinode-944570) Creating domain...
	I0830 20:24:38.857035  241645 main.go:141] libmachine: (multinode-944570) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17145-222139/.minikube
	I0830 20:24:38.857043  241645 main.go:141] libmachine: (multinode-944570) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17145-222139
	I0830 20:24:38.857049  241645 main.go:141] libmachine: (multinode-944570) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0830 20:24:38.857056  241645 main.go:141] libmachine: (multinode-944570) DBG | Checking permissions on dir: /home/jenkins
	I0830 20:24:38.857062  241645 main.go:141] libmachine: (multinode-944570) DBG | Checking permissions on dir: /home
	I0830 20:24:38.857070  241645 main.go:141] libmachine: (multinode-944570) DBG | Skipping /home - not owner
	I0830 20:24:38.858299  241645 main.go:141] libmachine: (multinode-944570) define libvirt domain using xml: 
	I0830 20:24:38.858338  241645 main.go:141] libmachine: (multinode-944570) <domain type='kvm'>
	I0830 20:24:38.858359  241645 main.go:141] libmachine: (multinode-944570)   <name>multinode-944570</name>
	I0830 20:24:38.858373  241645 main.go:141] libmachine: (multinode-944570)   <memory unit='MiB'>2200</memory>
	I0830 20:24:38.858380  241645 main.go:141] libmachine: (multinode-944570)   <vcpu>2</vcpu>
	I0830 20:24:38.858385  241645 main.go:141] libmachine: (multinode-944570)   <features>
	I0830 20:24:38.858391  241645 main.go:141] libmachine: (multinode-944570)     <acpi/>
	I0830 20:24:38.858398  241645 main.go:141] libmachine: (multinode-944570)     <apic/>
	I0830 20:24:38.858404  241645 main.go:141] libmachine: (multinode-944570)     <pae/>
	I0830 20:24:38.858413  241645 main.go:141] libmachine: (multinode-944570)     
	I0830 20:24:38.858425  241645 main.go:141] libmachine: (multinode-944570)   </features>
	I0830 20:24:38.858439  241645 main.go:141] libmachine: (multinode-944570)   <cpu mode='host-passthrough'>
	I0830 20:24:38.858468  241645 main.go:141] libmachine: (multinode-944570)   
	I0830 20:24:38.858493  241645 main.go:141] libmachine: (multinode-944570)   </cpu>
	I0830 20:24:38.858508  241645 main.go:141] libmachine: (multinode-944570)   <os>
	I0830 20:24:38.858524  241645 main.go:141] libmachine: (multinode-944570)     <type>hvm</type>
	I0830 20:24:38.858539  241645 main.go:141] libmachine: (multinode-944570)     <boot dev='cdrom'/>
	I0830 20:24:38.858552  241645 main.go:141] libmachine: (multinode-944570)     <boot dev='hd'/>
	I0830 20:24:38.858566  241645 main.go:141] libmachine: (multinode-944570)     <bootmenu enable='no'/>
	I0830 20:24:38.858578  241645 main.go:141] libmachine: (multinode-944570)   </os>
	I0830 20:24:38.858596  241645 main.go:141] libmachine: (multinode-944570)   <devices>
	I0830 20:24:38.858617  241645 main.go:141] libmachine: (multinode-944570)     <disk type='file' device='cdrom'>
	I0830 20:24:38.858636  241645 main.go:141] libmachine: (multinode-944570)       <source file='/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/boot2docker.iso'/>
	I0830 20:24:38.858652  241645 main.go:141] libmachine: (multinode-944570)       <target dev='hdc' bus='scsi'/>
	I0830 20:24:38.858667  241645 main.go:141] libmachine: (multinode-944570)       <readonly/>
	I0830 20:24:38.858679  241645 main.go:141] libmachine: (multinode-944570)     </disk>
	I0830 20:24:38.858695  241645 main.go:141] libmachine: (multinode-944570)     <disk type='file' device='disk'>
	I0830 20:24:38.858710  241645 main.go:141] libmachine: (multinode-944570)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0830 20:24:38.858779  241645 main.go:141] libmachine: (multinode-944570)       <source file='/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/multinode-944570.rawdisk'/>
	I0830 20:24:38.858815  241645 main.go:141] libmachine: (multinode-944570)       <target dev='hda' bus='virtio'/>
	I0830 20:24:38.858831  241645 main.go:141] libmachine: (multinode-944570)     </disk>
	I0830 20:24:38.858844  241645 main.go:141] libmachine: (multinode-944570)     <interface type='network'>
	I0830 20:24:38.858859  241645 main.go:141] libmachine: (multinode-944570)       <source network='mk-multinode-944570'/>
	I0830 20:24:38.858871  241645 main.go:141] libmachine: (multinode-944570)       <model type='virtio'/>
	I0830 20:24:38.858884  241645 main.go:141] libmachine: (multinode-944570)     </interface>
	I0830 20:24:38.858896  241645 main.go:141] libmachine: (multinode-944570)     <interface type='network'>
	I0830 20:24:38.858924  241645 main.go:141] libmachine: (multinode-944570)       <source network='default'/>
	I0830 20:24:38.858948  241645 main.go:141] libmachine: (multinode-944570)       <model type='virtio'/>
	I0830 20:24:38.858964  241645 main.go:141] libmachine: (multinode-944570)     </interface>
	I0830 20:24:38.858976  241645 main.go:141] libmachine: (multinode-944570)     <serial type='pty'>
	I0830 20:24:38.858990  241645 main.go:141] libmachine: (multinode-944570)       <target port='0'/>
	I0830 20:24:38.859001  241645 main.go:141] libmachine: (multinode-944570)     </serial>
	I0830 20:24:38.859021  241645 main.go:141] libmachine: (multinode-944570)     <console type='pty'>
	I0830 20:24:38.859037  241645 main.go:141] libmachine: (multinode-944570)       <target type='serial' port='0'/>
	I0830 20:24:38.859055  241645 main.go:141] libmachine: (multinode-944570)     </console>
	I0830 20:24:38.859067  241645 main.go:141] libmachine: (multinode-944570)     <rng model='virtio'>
	I0830 20:24:38.859082  241645 main.go:141] libmachine: (multinode-944570)       <backend model='random'>/dev/random</backend>
	I0830 20:24:38.859092  241645 main.go:141] libmachine: (multinode-944570)     </rng>
	I0830 20:24:38.859104  241645 main.go:141] libmachine: (multinode-944570)     
	I0830 20:24:38.859118  241645 main.go:141] libmachine: (multinode-944570)     
	I0830 20:24:38.859131  241645 main.go:141] libmachine: (multinode-944570)   </devices>
	I0830 20:24:38.859142  241645 main.go:141] libmachine: (multinode-944570) </domain>
	I0830 20:24:38.859157  241645 main.go:141] libmachine: (multinode-944570) 
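A few things worth reading out of the domain XML above: the VM gets two virtio NICs, one on the private mk-multinode-944570 network (the cluster network whose DHCP lease is matched below) and one on libvirt's default NAT network for outbound traffic; the boot order tries the boot2docker ISO (cdrom) before the raw disk, so a fresh disk boots the live image; and a virtio RNG backed by the host's /dev/random keeps the guest's entropy pool fed.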
	I0830 20:24:38.863828  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:65:c5:b0 in network default
	I0830 20:24:38.864343  241645 main.go:141] libmachine: (multinode-944570) Ensuring networks are active...
	I0830 20:24:38.864366  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:24:38.865057  241645 main.go:141] libmachine: (multinode-944570) Ensuring network default is active
	I0830 20:24:38.865375  241645 main.go:141] libmachine: (multinode-944570) Ensuring network mk-multinode-944570 is active
	I0830 20:24:38.865863  241645 main.go:141] libmachine: (multinode-944570) Getting domain xml...
	I0830 20:24:38.866477  241645 main.go:141] libmachine: (multinode-944570) Creating domain...
	I0830 20:24:40.088518  241645 main.go:141] libmachine: (multinode-944570) Waiting to get IP...
	I0830 20:24:40.089305  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:24:40.089634  241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
	I0830 20:24:40.089683  241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:40.089612  241668 retry.go:31] will retry after 222.540492ms: waiting for machine to come up
	I0830 20:24:40.314007  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:24:40.314535  241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
	I0830 20:24:40.314560  241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:40.314475  241668 retry.go:31] will retry after 290.614479ms: waiting for machine to come up
	I0830 20:24:40.607022  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:24:40.607398  241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
	I0830 20:24:40.607422  241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:40.607367  241668 retry.go:31] will retry after 406.297764ms: waiting for machine to come up
	I0830 20:24:41.014923  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:24:41.015410  241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
	I0830 20:24:41.015444  241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:41.015372  241668 retry.go:31] will retry after 516.548653ms: waiting for machine to come up
	I0830 20:24:41.533085  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:24:41.533545  241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
	I0830 20:24:41.533568  241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:41.533486  241668 retry.go:31] will retry after 758.9067ms: waiting for machine to come up
	I0830 20:24:42.293602  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:24:42.294014  241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
	I0830 20:24:42.294047  241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:42.293953  241668 retry.go:31] will retry after 639.466704ms: waiting for machine to come up
	I0830 20:24:42.934908  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:24:42.935382  241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
	I0830 20:24:42.935411  241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:42.935332  241668 retry.go:31] will retry after 880.132321ms: waiting for machine to come up
	I0830 20:24:43.817512  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:24:43.818048  241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
	I0830 20:24:43.818075  241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:43.818003  241668 retry.go:31] will retry after 908.818154ms: waiting for machine to come up
	I0830 20:24:44.728538  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:24:44.729000  241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
	I0830 20:24:44.729025  241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:44.728941  241668 retry.go:31] will retry after 1.123347298s: waiting for machine to come up
	I0830 20:24:45.854259  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:24:45.854692  241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
	I0830 20:24:45.854716  241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:45.854639  241668 retry.go:31] will retry after 1.502405087s: waiting for machine to come up
	I0830 20:24:47.359507  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:24:47.359928  241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
	I0830 20:24:47.359957  241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:47.359886  241668 retry.go:31] will retry after 1.968504913s: waiting for machine to come up
	I0830 20:24:49.330159  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:24:49.330610  241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
	I0830 20:24:49.330645  241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:49.330580  241668 retry.go:31] will retry after 2.700334878s: waiting for machine to come up
	I0830 20:24:52.034447  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:24:52.034943  241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
	I0830 20:24:52.034967  241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:52.034881  241668 retry.go:31] will retry after 3.66452335s: waiting for machine to come up
	I0830 20:24:55.702938  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:24:55.703375  241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
	I0830 20:24:55.703398  241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:55.703350  241668 retry.go:31] will retry after 5.039181171s: waiting for machine to come up
	I0830 20:25:00.745412  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:00.745948  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has current primary IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:00.745970  241645 main.go:141] libmachine: (multinode-944570) Found IP for machine: 192.168.39.254
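The retry cadence above (222ms, 290ms, 406ms, ... up to ~5s) is retry.go's exponential backoff with jitter while polling DHCP leases for the new MAC. A minimal, self-contained sketch of the pattern; the base delay, growth factor, and cap are illustrative, not minikube's exact values:

// Sketch of the jittered exponential backoff visible in the retry.go lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(op func() error, maxAttempts int) error {
	delay := 200 * time.Millisecond
	for i := 0; i < maxAttempts; i++ {
		if err := op(); err == nil {
			return nil
		}
		// Up to 50% jitter so concurrent waiters don't poll in lockstep.
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
		if delay > 5*time.Second {
			delay = 5 * time.Second // cap the growth
		}
	}
	return errors.New("machine never came up")
}

func main() {
	attempts := 0
	_ = retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil
	}, 10)
}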
	I0830 20:25:00.745980  241645 main.go:141] libmachine: (multinode-944570) Reserving static IP address...
	I0830 20:25:00.746463  241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find host DHCP lease matching {name: "multinode-944570", mac: "52:54:00:50:42:84", ip: "192.168.39.254"} in network mk-multinode-944570
	I0830 20:25:00.820259  241645 main.go:141] libmachine: (multinode-944570) DBG | Getting to WaitForSSH function...
	I0830 20:25:00.820288  241645 main.go:141] libmachine: (multinode-944570) Reserved static IP address: 192.168.39.254
	I0830 20:25:00.820302  241645 main.go:141] libmachine: (multinode-944570) Waiting for SSH to be available...
	I0830 20:25:00.822903  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:00.823346  241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:minikube Clientid:01:52:54:00:50:42:84}
	I0830 20:25:00.823379  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:00.823556  241645 main.go:141] libmachine: (multinode-944570) DBG | Using SSH client type: external
	I0830 20:25:00.823580  241645 main.go:141] libmachine: (multinode-944570) DBG | Using SSH private key: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa (-rw-------)
	I0830 20:25:00.823619  241645 main.go:141] libmachine: (multinode-944570) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.254 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 20:25:00.823652  241645 main.go:141] libmachine: (multinode-944570) DBG | About to run SSH command:
	I0830 20:25:00.823674  241645 main.go:141] libmachine: (multinode-944570) DBG | exit 0
	I0830 20:25:00.918852  241645 main.go:141] libmachine: (multinode-944570) DBG | SSH cmd err, output: <nil>: 
	I0830 20:25:00.919136  241645 main.go:141] libmachine: (multinode-944570) KVM machine creation complete!
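The WaitForSSH step above is deliberately simple: shell out to ssh with host-key checking disabled and run `exit 0` until it returns cleanly. A hedged sketch of the same probe using os/exec; the flags, address, and key path are taken from the DBG lines above, while the 2s poll interval and timeout are assumptions:

// Illustrative WaitForSSH-style probe, not libmachine's actual code.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForSSH(addr, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+addr, "exit 0")
		if cmd.Run() == nil {
			return nil // sshd is up and the key was accepted
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s not available within %v", addr, timeout)
}

func main() {
	err := waitForSSH("192.168.39.254",
		"/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa",
		time.Minute)
	fmt.Println("waitForSSH:", err)
}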
	I0830 20:25:00.919547  241645 main.go:141] libmachine: (multinode-944570) Calling .GetConfigRaw
	I0830 20:25:00.920154  241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
	I0830 20:25:00.920376  241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
	I0830 20:25:00.920544  241645 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0830 20:25:00.920562  241645 main.go:141] libmachine: (multinode-944570) Calling .GetState
	I0830 20:25:00.922044  241645 main.go:141] libmachine: Detecting operating system of created instance...
	I0830 20:25:00.922061  241645 main.go:141] libmachine: Waiting for SSH to be available...
	I0830 20:25:00.922070  241645 main.go:141] libmachine: Getting to WaitForSSH function...
	I0830 20:25:00.922080  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
	I0830 20:25:00.924403  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:00.924775  241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:25:00.924819  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:00.924928  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
	I0830 20:25:00.925120  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:25:00.925287  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:25:00.925414  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
	I0830 20:25:00.925582  241645 main.go:141] libmachine: Using SSH client type: native
	I0830 20:25:00.926249  241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0830 20:25:00.926271  241645 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0830 20:25:01.054153  241645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 20:25:01.054177  241645 main.go:141] libmachine: Detecting the provisioner...
	I0830 20:25:01.054195  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
	I0830 20:25:01.056963  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:01.057363  241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:25:01.057400  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:01.057515  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
	I0830 20:25:01.057711  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:25:01.057884  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:25:01.058065  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
	I0830 20:25:01.058226  241645 main.go:141] libmachine: Using SSH client type: native
	I0830 20:25:01.058617  241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0830 20:25:01.058630  241645 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0830 20:25:01.191831  241645 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0830 20:25:01.191899  241645 main.go:141] libmachine: found compatible host: buildroot
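Provisioner detection boils down to running `cat /etc/os-release` and matching on the ID field, which is "buildroot" here. A minimal parser sketch of that idea; libmachine's real detection considers more fields than this:

// Sketch of matching the os-release ID that drives "found compatible host".
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func osReleaseID(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		if line := sc.Text(); strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return ""
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2021.02.12-1-g88b5c50-dirty\nID=buildroot\n"
	fmt.Println("found compatible host:", osReleaseID(sample))
}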
	I0830 20:25:01.191915  241645 main.go:141] libmachine: Provisioning with buildroot...
	I0830 20:25:01.191931  241645 main.go:141] libmachine: (multinode-944570) Calling .GetMachineName
	I0830 20:25:01.192233  241645 buildroot.go:166] provisioning hostname "multinode-944570"
	I0830 20:25:01.192266  241645 main.go:141] libmachine: (multinode-944570) Calling .GetMachineName
	I0830 20:25:01.192471  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
	I0830 20:25:01.194982  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:01.195339  241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:25:01.195370  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:01.195504  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
	I0830 20:25:01.195701  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:25:01.195854  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:25:01.195955  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
	I0830 20:25:01.196145  241645 main.go:141] libmachine: Using SSH client type: native
	I0830 20:25:01.196528  241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0830 20:25:01.196542  241645 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-944570 && echo "multinode-944570" | sudo tee /etc/hostname
	I0830 20:25:01.338059  241645 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-944570
	
	I0830 20:25:01.338086  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
	I0830 20:25:01.341056  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:01.341394  241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:25:01.341438  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:01.341638  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
	I0830 20:25:01.341835  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:25:01.342008  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:25:01.342173  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
	I0830 20:25:01.342385  241645 main.go:141] libmachine: Using SSH client type: native
	I0830 20:25:01.342775  241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0830 20:25:01.342792  241645 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-944570' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-944570/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-944570' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 20:25:01.483104  241645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
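The two SSH commands above are the standard hostname-provisioning pair: the first sets the live hostname and writes /etc/hostname, and the second pins 127.0.1.1 to the new name in /etc/hosts (replacing an existing 127.0.1.1 entry if present, appending otherwise) so the node can resolve its own hostname without DNS, which components such as kubelet rely on when registering the node.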
	I0830 20:25:01.483131  241645 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17145-222139/.minikube CaCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17145-222139/.minikube}
	I0830 20:25:01.483173  241645 buildroot.go:174] setting up certificates
	I0830 20:25:01.483183  241645 provision.go:83] configureAuth start
	I0830 20:25:01.483195  241645 main.go:141] libmachine: (multinode-944570) Calling .GetMachineName
	I0830 20:25:01.483529  241645 main.go:141] libmachine: (multinode-944570) Calling .GetIP
	I0830 20:25:01.486542  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:01.486968  241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:25:01.487005  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:01.487211  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
	I0830 20:25:01.489871  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:01.490219  241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:25:01.490270  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:01.490408  241645 provision.go:138] copyHostCerts
	I0830 20:25:01.490452  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem
	I0830 20:25:01.490491  241645 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem, removing ...
	I0830 20:25:01.490503  241645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem
	I0830 20:25:01.490583  241645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem (1123 bytes)
	I0830 20:25:01.490707  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem
	I0830 20:25:01.490735  241645 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem, removing ...
	I0830 20:25:01.490742  241645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem
	I0830 20:25:01.490783  241645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem (1675 bytes)
	I0830 20:25:01.490844  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem
	I0830 20:25:01.490866  241645 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem, removing ...
	I0830 20:25:01.490875  241645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem
	I0830 20:25:01.490906  241645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem (1082 bytes)
	I0830 20:25:01.490969  241645 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca-key.pem org=jenkins.multinode-944570 san=[192.168.39.254 192.168.39.254 localhost 127.0.0.1 minikube multinode-944570]
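The "generating server cert" line shows the SAN list minikube bakes into the Docker TLS server certificate: the VM IP, localhost, 127.0.0.1, and the machine names. A self-contained sketch of producing such a cert with Go's crypto/x509; the SAN values, org, and lifetime are copied from the log, while the throwaway CA, key size, serials, and elided error handling are illustrative:

// Illustrative CA-signed server certificate with SANs (errors elided with _).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem + ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-944570"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// san=[...] from the log line:
		DNSNames:    []string{"localhost", "minikube", "multinode-944570"},
		IPAddresses: []net.IP{net.ParseIP("192.168.39.254"), net.ParseIP("127.0.0.1")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}) // server.pem
}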
	I0830 20:25:01.709034  241645 provision.go:172] copyRemoteCerts
	I0830 20:25:01.709108  241645 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 20:25:01.709147  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
	I0830 20:25:01.711738  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:01.712084  241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:25:01.712124  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:01.712279  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
	I0830 20:25:01.712503  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:25:01.712682  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
	I0830 20:25:01.712851  241645 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa Username:docker}
	I0830 20:25:01.809341  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0830 20:25:01.809417  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0830 20:25:01.831592  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0830 20:25:01.831657  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0830 20:25:01.853695  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0830 20:25:01.853768  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 20:25:01.873313  241645 provision.go:86] duration metric: configureAuth took 390.114671ms
	I0830 20:25:01.873335  241645 buildroot.go:189] setting minikube options for container-runtime
	I0830 20:25:01.873493  241645 config.go:182] Loaded profile config "multinode-944570": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0830 20:25:01.873517  241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
	I0830 20:25:01.873813  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
	I0830 20:25:01.876220  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:01.876551  241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:25:01.876595  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:01.876794  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
	I0830 20:25:01.876992  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:25:01.877188  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:25:01.877389  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
	I0830 20:25:01.877561  241645 main.go:141] libmachine: Using SSH client type: native
	I0830 20:25:01.877971  241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0830 20:25:01.877988  241645 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0830 20:25:02.008621  241645 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0830 20:25:02.008644  241645 buildroot.go:70] root file system type: tmpfs
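The `df --output=fstype / | tail -n 1` probe above tells the provisioner what the root filesystem is; "tmpfs" identifies the RAM-backed buildroot live image. A small sketch of the same probe from Go (GNU df assumed):

// Equivalent of `df --output=fstype / | tail -n 1`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func rootFSType() (string, error) {
	out, err := exec.Command("df", "--output=fstype", "/").Output()
	if err != nil {
		return "", err
	}
	fields := strings.Fields(string(out)) // e.g. ["Type", "tmpfs"]
	if len(fields) == 0 {
		return "", fmt.Errorf("unexpected df output: %q", out)
	}
	return fields[len(fields)-1], nil
}

func main() {
	t, err := rootFSType()
	if err != nil {
		fmt.Println("df failed:", err)
		return
	}
	fmt.Println("root file system type:", t)
}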
	I0830 20:25:02.008767  241645 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0830 20:25:02.008785  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
	I0830 20:25:02.011410  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:02.011756  241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:25:02.011782  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:02.011918  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
	I0830 20:25:02.012094  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:25:02.012232  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:25:02.012360  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
	I0830 20:25:02.012523  241645 main.go:141] libmachine: Using SSH client type: native
	I0830 20:25:02.012908  241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0830 20:25:02.012966  241645 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0830 20:25:02.156549  241645 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
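The empty ExecStart= directive in the unit above is the standard systemd mechanism for replacing an inherited start command. A minimal sketch of the same pattern written as a drop-in override rather than a full replacement unit (paths and dockerd flags illustrative, not taken from this run):

	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
	[Service]
	# Reset the inherited ExecStart list; without this line systemd rejects a
	# second ExecStart for non-oneshot services.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker
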
	I0830 20:25:02.156582  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
	I0830 20:25:02.159519  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:02.159924  241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:25:02.159957  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:02.160223  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
	I0830 20:25:02.160457  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:25:02.160635  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:25:02.160768  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
	I0830 20:25:02.160985  241645 main.go:141] libmachine: Using SSH client type: native
	I0830 20:25:02.161389  241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0830 20:25:02.161408  241645 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0830 20:25:02.899373  241645 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
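The "can't stat" output above is the expected first-boot branch of the install-if-changed idiom: diff exits non-zero when the target differs or does not yet exist, so the move-and-restart block runs only when the unit actually changed. A generalized sketch of the same idiom (function name hypothetical):

	install_if_changed() {
	  # $1 = staged file, $2 = live file; remaining args = command run on change.
	  local new=$1 cur=$2; shift 2
	  sudo diff -u "$cur" "$new" || { sudo mv "$new" "$cur"; "$@"; }
	}
	install_if_changed /lib/systemd/system/docker.service.new \
	  /lib/systemd/system/docker.service \
	  sudo systemctl daemon-reload
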
	I0830 20:25:02.899405  241645 main.go:141] libmachine: Checking connection to Docker...
	I0830 20:25:02.899418  241645 main.go:141] libmachine: (multinode-944570) Calling .GetURL
	I0830 20:25:02.900707  241645 main.go:141] libmachine: (multinode-944570) DBG | Using libvirt version 6000000
	I0830 20:25:02.902913  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:02.903249  241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:25:02.903277  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:02.903449  241645 main.go:141] libmachine: Docker is up and running!
	I0830 20:25:02.903468  241645 main.go:141] libmachine: Reticulating splines...
	I0830 20:25:02.903476  241645 client.go:171] LocalClient.Create took 24.569157111s
	I0830 20:25:02.903500  241645 start.go:167] duration metric: libmachine.API.Create for "multinode-944570" took 24.569226582s
	I0830 20:25:02.903510  241645 start.go:300] post-start starting for "multinode-944570" (driver="kvm2")
	I0830 20:25:02.903519  241645 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 20:25:02.903541  241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
	I0830 20:25:02.903865  241645 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 20:25:02.903890  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
	I0830 20:25:02.906005  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:02.906328  241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:25:02.906361  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:02.906513  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
	I0830 20:25:02.906744  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:25:02.906942  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
	I0830 20:25:02.907118  241645 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa Username:docker}
	I0830 20:25:02.999889  241645 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 20:25:03.003463  241645 command_runner.go:130] > NAME=Buildroot
	I0830 20:25:03.003489  241645 command_runner.go:130] > VERSION=2021.02.12-1-g88b5c50-dirty
	I0830 20:25:03.003493  241645 command_runner.go:130] > ID=buildroot
	I0830 20:25:03.003499  241645 command_runner.go:130] > VERSION_ID=2021.02.12
	I0830 20:25:03.003503  241645 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0830 20:25:03.003551  241645 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 20:25:03.003575  241645 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-222139/.minikube/addons for local assets ...
	I0830 20:25:03.003658  241645 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-222139/.minikube/files for local assets ...
	I0830 20:25:03.003750  241645 filesync.go:149] local asset: /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem -> 2293472.pem in /etc/ssl/certs
	I0830 20:25:03.003761  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem -> /etc/ssl/certs/2293472.pem
	I0830 20:25:03.003837  241645 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 20:25:03.011525  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem --> /etc/ssl/certs/2293472.pem (1708 bytes)
	I0830 20:25:03.032042  241645 start.go:303] post-start completed in 128.515897ms
	I0830 20:25:03.032101  241645 main.go:141] libmachine: (multinode-944570) Calling .GetConfigRaw
	I0830 20:25:03.032744  241645 main.go:141] libmachine: (multinode-944570) Calling .GetIP
	I0830 20:25:03.035354  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:03.035725  241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:25:03.035764  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:03.035980  241645 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/config.json ...
	I0830 20:25:03.036145  241645 start.go:128] duration metric: createHost completed in 24.721130412s
	I0830 20:25:03.036175  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
	I0830 20:25:03.038222  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:03.038509  241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:25:03.038538  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:03.038684  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
	I0830 20:25:03.038880  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:25:03.039021  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:25:03.039182  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
	I0830 20:25:03.039346  241645 main.go:141] libmachine: Using SSH client type: native
	I0830 20:25:03.039785  241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0830 20:25:03.039799  241645 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0830 20:25:03.171749  241645 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693427103.145508549
	
	I0830 20:25:03.171775  241645 fix.go:206] guest clock: 1693427103.145508549
	I0830 20:25:03.171783  241645 fix.go:219] Guest: 2023-08-30 20:25:03.145508549 +0000 UTC Remote: 2023-08-30 20:25:03.036163347 +0000 UTC m=+24.831539919 (delta=109.345202ms)
	I0830 20:25:03.171803  241645 fix.go:190] guest clock delta is within tolerance: 109.345202ms
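	The guest-clock check above reads the VM's date +%s.%N over SSH and accepts the skew when it falls inside a fixed tolerance. A rough standalone equivalent (SSH target taken from this run; the 1-second threshold is illustrative, the tolerance used by the tool is not shown in the log):
	
	guest=$(ssh docker@192.168.39.254 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v h="$host" -v g="$guest" 'BEGIN {
	  d = h - g; if (d < 0) d = -d            # absolute skew in seconds
	  printf "delta=%.6fs\n", d
	  exit (d < 1.0 ? 0 : 1)                  # illustrative 1s threshold
	}'
	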
	I0830 20:25:03.171810  241645 start.go:83] releasing machines lock for "multinode-944570", held for 24.856863444s
	I0830 20:25:03.171828  241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
	I0830 20:25:03.172092  241645 main.go:141] libmachine: (multinode-944570) Calling .GetIP
	I0830 20:25:03.174430  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:03.174803  241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:25:03.174828  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:03.174993  241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
	I0830 20:25:03.175589  241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
	I0830 20:25:03.175764  241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
	I0830 20:25:03.175840  241645 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 20:25:03.175904  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
	I0830 20:25:03.176013  241645 ssh_runner.go:195] Run: cat /version.json
	I0830 20:25:03.176037  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
	I0830 20:25:03.178485  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:03.178855  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:03.178876  241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:25:03.178891  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:03.179065  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
	I0830 20:25:03.179257  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:25:03.179376  241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:25:03.179404  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:03.179413  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
	I0830 20:25:03.179591  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
	I0830 20:25:03.179590  241645 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa Username:docker}
	I0830 20:25:03.179756  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:25:03.180044  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
	I0830 20:25:03.180191  241645 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa Username:docker}
	I0830 20:25:03.302799  241645 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0830 20:25:03.302869  241645 command_runner.go:130] > {"iso_version": "v1.31.0-1692872107-17120", "kicbase_version": "v0.0.40-1692613578-17086", "minikube_version": "v1.31.2", "commit": "9dc31f0284dc1a8a35859648c60120733f0f8296"}
	I0830 20:25:03.303015  241645 ssh_runner.go:195] Run: systemctl --version
	I0830 20:25:03.307972  241645 command_runner.go:130] > systemd 247 (247)
	I0830 20:25:03.308007  241645 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0830 20:25:03.308356  241645 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0830 20:25:03.313222  241645 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0830 20:25:03.313435  241645 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 20:25:03.313503  241645 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 20:25:03.327116  241645 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0830 20:25:03.327378  241645 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 20:25:03.327402  241645 start.go:466] detecting cgroup driver to use...
	I0830 20:25:03.327619  241645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 20:25:03.343837  241645 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0830 20:25:03.344435  241645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0830 20:25:03.353064  241645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0830 20:25:03.361596  241645 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0830 20:25:03.361651  241645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0830 20:25:03.370242  241645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0830 20:25:03.378852  241645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0830 20:25:03.387260  241645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0830 20:25:03.396213  241645 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 20:25:03.404744  241645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
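	The sed edits above leave /etc/containerd/config.toml pointing at the pause 3.9 sandbox image, the runc v2 runtime with cgroupfs, and /etc/cni/net.d. A quick spot-check of the result (expected values, derived from the edits, shown as comments):
	
	grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	# expected, per the edits above:
	#   sandbox_image = "registry.k8s.io/pause:3.9"
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"
	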
	I0830 20:25:03.413143  241645 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 20:25:03.420408  241645 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0830 20:25:03.420483  241645 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 20:25:03.427869  241645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 20:25:03.524462  241645 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0830 20:25:03.541100  241645 start.go:466] detecting cgroup driver to use...
	I0830 20:25:03.541187  241645 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0830 20:25:03.563244  241645 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0830 20:25:03.563357  241645 command_runner.go:130] > [Unit]
	I0830 20:25:03.563386  241645 command_runner.go:130] > Description=Docker Application Container Engine
	I0830 20:25:03.563396  241645 command_runner.go:130] > Documentation=https://docs.docker.com
	I0830 20:25:03.563408  241645 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0830 20:25:03.563418  241645 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0830 20:25:03.563426  241645 command_runner.go:130] > StartLimitBurst=3
	I0830 20:25:03.563436  241645 command_runner.go:130] > StartLimitIntervalSec=60
	I0830 20:25:03.563445  241645 command_runner.go:130] > [Service]
	I0830 20:25:03.563452  241645 command_runner.go:130] > Type=notify
	I0830 20:25:03.563461  241645 command_runner.go:130] > Restart=on-failure
	I0830 20:25:03.563473  241645 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0830 20:25:03.563488  241645 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0830 20:25:03.563502  241645 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0830 20:25:03.563516  241645 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0830 20:25:03.563528  241645 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0830 20:25:03.563542  241645 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0830 20:25:03.563557  241645 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0830 20:25:03.563578  241645 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0830 20:25:03.563592  241645 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0830 20:25:03.563601  241645 command_runner.go:130] > ExecStart=
	I0830 20:25:03.563627  241645 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0830 20:25:03.563647  241645 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0830 20:25:03.563658  241645 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0830 20:25:03.563671  241645 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0830 20:25:03.563681  241645 command_runner.go:130] > LimitNOFILE=infinity
	I0830 20:25:03.563687  241645 command_runner.go:130] > LimitNPROC=infinity
	I0830 20:25:03.563696  241645 command_runner.go:130] > LimitCORE=infinity
	I0830 20:25:03.563708  241645 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0830 20:25:03.563720  241645 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0830 20:25:03.563729  241645 command_runner.go:130] > TasksMax=infinity
	I0830 20:25:03.563739  241645 command_runner.go:130] > TimeoutStartSec=0
	I0830 20:25:03.563751  241645 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0830 20:25:03.563761  241645 command_runner.go:130] > Delegate=yes
	I0830 20:25:03.563774  241645 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0830 20:25:03.563783  241645 command_runner.go:130] > KillMode=process
	I0830 20:25:03.563795  241645 command_runner.go:130] > [Install]
	I0830 20:25:03.563810  241645 command_runner.go:130] > WantedBy=multi-user.target
	I0830 20:25:03.564476  241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 20:25:03.576659  241645 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 20:25:03.592708  241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 20:25:03.603906  241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0830 20:25:03.614217  241645 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0830 20:25:03.638599  241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0830 20:25:03.650369  241645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 20:25:03.665974  241645 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
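	With /etc/crictl.yaml now pointing at cri-dockerd instead of containerd, crictl calls go through the Docker shim. The endpoint can also be checked explicitly, bypassing the config file:
	
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
	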
	I0830 20:25:03.666335  241645 ssh_runner.go:195] Run: which cri-dockerd
	I0830 20:25:03.669618  241645 command_runner.go:130] > /usr/bin/cri-dockerd
	I0830 20:25:03.669860  241645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0830 20:25:03.677395  241645 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0830 20:25:03.691827  241645 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0830 20:25:03.796524  241645 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0830 20:25:03.902931  241645 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0830 20:25:03.902965  241645 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0830 20:25:03.918652  241645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 20:25:04.016625  241645 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0830 20:25:05.368771  241645 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.352099845s)
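	The 144-byte daemon.json copied over above is not echoed in the log; given the "configuring docker to use cgroupfs" message, it presumably sets the cgroup driver, roughly like this sketch (the exact contents are an assumption, not taken from this run):
	
	sudo tee /etc/docker/daemon.json <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker
	docker info --format '{{.CgroupDriver}}'   # should print: cgroupfs
	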
	I0830 20:25:05.368858  241645 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0830 20:25:05.466546  241645 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0830 20:25:05.576501  241645 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0830 20:25:05.684851  241645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 20:25:05.794664  241645 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0830 20:25:05.811767  241645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 20:25:05.914344  241645 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0830 20:25:05.984596  241645 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0830 20:25:05.984689  241645 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0830 20:25:05.990092  241645 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0830 20:25:05.990122  241645 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0830 20:25:05.990132  241645 command_runner.go:130] > Device: 16h/22d	Inode: 906         Links: 1
	I0830 20:25:05.990139  241645 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0830 20:25:05.990145  241645 command_runner.go:130] > Access: 2023-08-30 20:25:05.905320606 +0000
	I0830 20:25:05.990150  241645 command_runner.go:130] > Modify: 2023-08-30 20:25:05.905320606 +0000
	I0830 20:25:05.990154  241645 command_runner.go:130] > Change: 2023-08-30 20:25:05.907323346 +0000
	I0830 20:25:05.990158  241645 command_runner.go:130] >  Birth: -
	I0830 20:25:05.990337  241645 start.go:534] Will wait 60s for crictl version
	I0830 20:25:05.990399  241645 ssh_runner.go:195] Run: which crictl
	I0830 20:25:05.994229  241645 command_runner.go:130] > /usr/bin/crictl
	I0830 20:25:05.994314  241645 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 20:25:06.033459  241645 command_runner.go:130] > Version:  0.1.0
	I0830 20:25:06.033482  241645 command_runner.go:130] > RuntimeName:  docker
	I0830 20:25:06.033486  241645 command_runner.go:130] > RuntimeVersion:  24.0.5
	I0830 20:25:06.033492  241645 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0830 20:25:06.033519  241645 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.5
	RuntimeApiVersion:  v1alpha2
	I0830 20:25:06.033580  241645 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0830 20:25:06.057956  241645 command_runner.go:130] > 24.0.5
	I0830 20:25:06.058230  241645 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0830 20:25:06.083787  241645 command_runner.go:130] > 24.0.5
	I0830 20:25:06.086937  241645 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.5 ...
	I0830 20:25:06.086981  241645 main.go:141] libmachine: (multinode-944570) Calling .GetIP
	I0830 20:25:06.089771  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:06.090200  241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:25:06.090265  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:06.090492  241645 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0830 20:25:06.094327  241645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 20:25:06.105852  241645 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0830 20:25:06.105911  241645 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0830 20:25:06.122627  241645 docker.go:636] Got preloaded images: 
	I0830 20:25:06.122653  241645 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.1 wasn't preloaded
	I0830 20:25:06.122742  241645 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0830 20:25:06.131502  241645 command_runner.go:139] > {"Repositories":{}}
	I0830 20:25:06.131790  241645 ssh_runner.go:195] Run: which lz4
	I0830 20:25:06.135029  241645 command_runner.go:130] > /usr/bin/lz4
	I0830 20:25:06.135173  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0830 20:25:06.135279  241645 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0830 20:25:06.139161  241645 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 20:25:06.139198  241645 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 20:25:06.139217  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (422113676 bytes)
	I0830 20:25:07.585973  241645 docker.go:600] Took 1.450719 seconds to copy over tarball
	I0830 20:25:07.586052  241645 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0830 20:25:09.823196  241645 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.237111552s)
	I0830 20:25:09.823234  241645 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0830 20:25:09.862854  241645 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0830 20:25:09.871865  241645 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.1":"sha256:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77","registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2":"sha256:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.1":"sha256:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac","registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195":"sha256:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.1":"sha256:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5","registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c":"sha256:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.1":"sha256:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a","registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4":"sha256:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0830 20:25:09.872029  241645 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0830 20:25:09.886945  241645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 20:25:09.989511  241645 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0830 20:25:14.405230  241645 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.415676095s)
	I0830 20:25:14.405322  241645 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0830 20:25:14.422787  241645 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.1
	I0830 20:25:14.422809  241645 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.1
	I0830 20:25:14.422815  241645 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 20:25:14.422827  241645 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.1
	I0830 20:25:14.422831  241645 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0830 20:25:14.422836  241645 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0830 20:25:14.422840  241645 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0830 20:25:14.422845  241645 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 20:25:14.423955  241645 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0830 20:25:14.423981  241645 cache_images.go:84] Images are preloaded, skipping loading
	I0830 20:25:14.424035  241645 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0830 20:25:14.448889  241645 command_runner.go:130] > cgroupfs
	I0830 20:25:14.449160  241645 cni.go:84] Creating CNI manager for ""
	I0830 20:25:14.449186  241645 cni.go:136] 1 nodes found, recommending kindnet
	I0830 20:25:14.449212  241645 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 20:25:14.449243  241645 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.254 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-944570 NodeName:multinode-944570 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.254"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.254 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 20:25:14.449461  241645 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.254
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-944570"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.254
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.254"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 20:25:14.449567  241645 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-944570 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.254
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-944570 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
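	The generated kubeadm YAML above is staged as /var/tmp/minikube/kubeadm.yaml.new and only promoted over kubeadm.yaml near the end of this section. One way to sanity-check such a file by hand, assuming kubeadm's config validate subcommand (present in recent releases) is acceptable here:
	
	sudo /var/lib/minikube/binaries/v1.28.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
	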
	I0830 20:25:14.449633  241645 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 20:25:14.458160  241645 command_runner.go:130] > kubeadm
	I0830 20:25:14.458178  241645 command_runner.go:130] > kubectl
	I0830 20:25:14.458182  241645 command_runner.go:130] > kubelet
	I0830 20:25:14.458202  241645 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 20:25:14.458278  241645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 20:25:14.466023  241645 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0830 20:25:14.480270  241645 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 20:25:14.494299  241645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0830 20:25:14.508858  241645 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0830 20:25:14.512350  241645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 20:25:14.523664  241645 certs.go:56] Setting up /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570 for IP: 192.168.39.254
	I0830 20:25:14.523700  241645 certs.go:190] acquiring lock for shared ca certs: {Name:mk1ac5fe312bfdaa0e7afaffac50c875afeaeaed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 20:25:14.523876  241645 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.key
	I0830 20:25:14.523917  241645 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17145-222139/.minikube/proxy-client-ca.key
	I0830 20:25:14.523955  241645 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.key
	I0830 20:25:14.523971  241645 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.crt with IP's: []
	I0830 20:25:14.604845  241645 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.crt ...
	I0830 20:25:14.604876  241645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.crt: {Name:mk3a81bce3b329f75a188d0b1d2532a803bc802a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 20:25:14.605076  241645 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.key ...
	I0830 20:25:14.605091  241645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.key: {Name:mk274cc8b2182f52eba6fef4283857d540e33f32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 20:25:14.605185  241645 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.key.9e1cae77
	I0830 20:25:14.605200  241645 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.crt.9e1cae77 with IP's: [192.168.39.254 10.96.0.1 127.0.0.1 10.0.0.1]
	I0830 20:25:14.697642  241645 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.crt.9e1cae77 ...
	I0830 20:25:14.697675  241645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.crt.9e1cae77: {Name:mk8de6e98fe5500c86a02985d64d4574319c01c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 20:25:14.697886  241645 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.key.9e1cae77 ...
	I0830 20:25:14.697902  241645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.key.9e1cae77: {Name:mk009aec9a9754bad8f4b6865632165d91d2d16f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 20:25:14.697991  241645 certs.go:337] copying /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.crt.9e1cae77 -> /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.crt
	I0830 20:25:14.698061  241645 certs.go:341] copying /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.key.9e1cae77 -> /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.key
	I0830 20:25:14.698118  241645 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/proxy-client.key
	I0830 20:25:14.698131  241645 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/proxy-client.crt with IP's: []
	I0830 20:25:14.888701  241645 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/proxy-client.crt ...
	I0830 20:25:14.888734  241645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/proxy-client.crt: {Name:mk023f547b5553e72f5c740f1d18b5133c723004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 20:25:14.888909  241645 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/proxy-client.key ...
	I0830 20:25:14.888920  241645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/proxy-client.key: {Name:mkd9897d466f60278c59be0457ce47ee40541bcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
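	The apiserver certificate above is generated in-process (crypto.go) with IP SANs 192.168.39.254, 10.96.0.1, 127.0.0.1 and 10.0.0.1. For reference, a rough openssl CLI equivalent of that step (file names illustrative; minikube does not shell out for this):
	
	openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube" \
	  -keyout apiserver.key -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -days 365 -out apiserver.crt \
	  -extfile <(printf 'subjectAltName=IP:192.168.39.254,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1')
	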
	I0830 20:25:14.888988  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0830 20:25:14.889005  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0830 20:25:14.889016  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0830 20:25:14.889028  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0830 20:25:14.889041  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0830 20:25:14.889052  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0830 20:25:14.889065  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0830 20:25:14.889077  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0830 20:25:14.889126  241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/229347.pem (1338 bytes)
	W0830 20:25:14.889163  241645 certs.go:433] ignoring /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/229347_empty.pem, impossibly tiny 0 bytes
	I0830 20:25:14.889171  241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 20:25:14.889197  241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem (1082 bytes)
	I0830 20:25:14.889220  241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem (1123 bytes)
	I0830 20:25:14.889243  241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem (1675 bytes)
	I0830 20:25:14.889279  241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem (1708 bytes)
	I0830 20:25:14.889303  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/229347.pem -> /usr/share/ca-certificates/229347.pem
	I0830 20:25:14.889316  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem -> /usr/share/ca-certificates/2293472.pem
	I0830 20:25:14.889329  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0830 20:25:14.889883  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 20:25:14.912136  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0830 20:25:14.932813  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 20:25:14.955446  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 20:25:14.976550  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 20:25:14.996785  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0830 20:25:15.017343  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 20:25:15.038584  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0830 20:25:15.059781  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/certs/229347.pem --> /usr/share/ca-certificates/229347.pem (1338 bytes)
	I0830 20:25:15.080315  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem --> /usr/share/ca-certificates/2293472.pem (1708 bytes)
	I0830 20:25:15.101293  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 20:25:15.121688  241645 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 20:25:15.135274  241645 ssh_runner.go:195] Run: openssl version
	I0830 20:25:15.140405  241645 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0830 20:25:15.140482  241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2293472.pem && ln -fs /usr/share/ca-certificates/2293472.pem /etc/ssl/certs/2293472.pem"
	I0830 20:25:15.149304  241645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2293472.pem
	I0830 20:25:15.153204  241645 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 30 20:12 /usr/share/ca-certificates/2293472.pem
	I0830 20:25:15.153401  241645 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 20:12 /usr/share/ca-certificates/2293472.pem
	I0830 20:25:15.153451  241645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2293472.pem
	I0830 20:25:15.158234  241645 command_runner.go:130] > 3ec20f2e
	I0830 20:25:15.158403  241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2293472.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 20:25:15.167474  241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 20:25:15.176508  241645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 20:25:15.180617  241645 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 30 20:06 /usr/share/ca-certificates/minikubeCA.pem
	I0830 20:25:15.180780  241645 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 20:06 /usr/share/ca-certificates/minikubeCA.pem
	I0830 20:25:15.180825  241645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 20:25:15.185637  241645 command_runner.go:130] > b5213941
	I0830 20:25:15.185862  241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 20:25:15.194768  241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229347.pem && ln -fs /usr/share/ca-certificates/229347.pem /etc/ssl/certs/229347.pem"
	I0830 20:25:15.203890  241645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229347.pem
	I0830 20:25:15.207968  241645 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 30 20:12 /usr/share/ca-certificates/229347.pem
	I0830 20:25:15.208108  241645 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 20:12 /usr/share/ca-certificates/229347.pem
	I0830 20:25:15.208147  241645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229347.pem
	I0830 20:25:15.212830  241645 command_runner.go:130] > 51391683
	I0830 20:25:15.213037  241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/229347.pem /etc/ssl/certs/51391683.0"
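	Each of the three certificate installs above follows the same shape: hash the cert with openssl x509 -hash, then symlink it as <hash>.0 so OpenSSL's subject-hash lookup (the step that c_rehash automates) can find it. Condensed into one reusable sketch:
	
	pem=/usr/share/ca-certificates/minikubeCA.pem   # any of the three certs above
	h=$(openssl x509 -hash -noout -in "$pem")
	sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"
	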
	I0830 20:25:15.221723  241645 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 20:25:15.225370  241645 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 20:25:15.225403  241645 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 20:25:15.225458  241645 kubeadm.go:404] StartCluster: {Name:multinode-944570 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-944570 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.254 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 20:25:15.225584  241645 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0830 20:25:15.241509  241645 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 20:25:15.249423  241645 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0830 20:25:15.249447  241645 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0830 20:25:15.249457  241645 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0830 20:25:15.249527  241645 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 20:25:15.257391  241645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 20:25:15.265194  241645 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0830 20:25:15.265221  241645 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0830 20:25:15.265232  241645 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0830 20:25:15.265245  241645 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 20:25:15.265283  241645 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 20:25:15.265324  241645 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
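Editor's note: the init invocation above pins PATH to the version-matched binaries directory and passes --ignore-preflight-errors for the directories, manifests, and ports that minikube manages itself, so a re-run over existing state does not abort. A sketch of how such a command line could be assembled (hypothetical initCommand helper, not minikube's implementation):

    // Sketch: build a kubeadm init invocation like the one in the log above
    // from a pinned binary directory, a rendered config file, and a list of
    // preflight checks to ignore.
    package main

    import (
        "fmt"
        "strings"
    )

    func initCommand(version string, ignored []string) string {
        binDir := "/var/lib/minikube/binaries/" + version
        return fmt.Sprintf(
            `sudo env PATH="%s:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s`,
            binDir, strings.Join(ignored, ","))
    }

    func main() {
        fmt.Println(initCommand("v1.28.1", []string{
            "DirAvailable--etc-kubernetes-manifests",
            "DirAvailable--var-lib-minikube",
            "Port-10250", "Swap", "NumCPU", "Mem",
        }))
    }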
	I0830 20:25:15.592806  241645 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 20:25:15.592841  241645 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 20:25:25.871537  241645 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0830 20:25:25.871568  241645 command_runner.go:130] > [init] Using Kubernetes version: v1.28.1
	I0830 20:25:25.871617  241645 kubeadm.go:322] [preflight] Running pre-flight checks
	I0830 20:25:25.871630  241645 command_runner.go:130] > [preflight] Running pre-flight checks
	I0830 20:25:25.871714  241645 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 20:25:25.871725  241645 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 20:25:25.871844  241645 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 20:25:25.871877  241645 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 20:25:25.872045  241645 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 20:25:25.872069  241645 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 20:25:25.872170  241645 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 20:25:25.872192  241645 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 20:25:25.874013  241645 out.go:204]   - Generating certificates and keys ...
	I0830 20:25:25.874110  241645 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0830 20:25:25.874122  241645 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0830 20:25:25.874220  241645 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0830 20:25:25.874238  241645 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0830 20:25:25.874338  241645 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0830 20:25:25.874347  241645 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0830 20:25:25.874422  241645 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0830 20:25:25.874430  241645 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0830 20:25:25.874514  241645 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0830 20:25:25.874524  241645 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0830 20:25:25.874593  241645 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0830 20:25:25.874604  241645 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0830 20:25:25.874699  241645 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0830 20:25:25.874715  241645 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0830 20:25:25.874881  241645 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-944570] and IPs [192.168.39.254 127.0.0.1 ::1]
	I0830 20:25:25.874897  241645 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-944570] and IPs [192.168.39.254 127.0.0.1 ::1]
	I0830 20:25:25.874968  241645 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0830 20:25:25.874975  241645 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0830 20:25:25.875140  241645 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-944570] and IPs [192.168.39.254 127.0.0.1 ::1]
	I0830 20:25:25.875154  241645 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-944570] and IPs [192.168.39.254 127.0.0.1 ::1]
	I0830 20:25:25.875249  241645 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0830 20:25:25.875260  241645 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0830 20:25:25.875354  241645 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0830 20:25:25.875376  241645 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0830 20:25:25.875440  241645 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0830 20:25:25.875448  241645 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0830 20:25:25.875527  241645 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 20:25:25.875536  241645 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 20:25:25.875624  241645 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 20:25:25.875635  241645 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 20:25:25.875711  241645 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 20:25:25.875722  241645 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 20:25:25.875799  241645 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 20:25:25.875801  241645 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 20:25:25.875883  241645 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 20:25:25.875891  241645 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 20:25:25.875989  241645 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 20:25:25.875998  241645 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 20:25:25.876082  241645 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 20:25:25.876092  241645 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 20:25:25.877841  241645 out.go:204]   - Booting up control plane ...
	I0830 20:25:25.877961  241645 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 20:25:25.877968  241645 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 20:25:25.878098  241645 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 20:25:25.878118  241645 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 20:25:25.878223  241645 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 20:25:25.878235  241645 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 20:25:25.878345  241645 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 20:25:25.878357  241645 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 20:25:25.878465  241645 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 20:25:25.878473  241645 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 20:25:25.878505  241645 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0830 20:25:25.878510  241645 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0830 20:25:25.878636  241645 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 20:25:25.878641  241645 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 20:25:25.878712  241645 command_runner.go:130] > [apiclient] All control plane components are healthy after 6.504601 seconds
	I0830 20:25:25.878718  241645 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.504601 seconds
	I0830 20:25:25.878906  241645 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 20:25:25.878920  241645 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 20:25:25.879034  241645 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 20:25:25.879041  241645 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 20:25:25.879087  241645 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0830 20:25:25.879093  241645 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0830 20:25:25.879275  241645 command_runner.go:130] > [mark-control-plane] Marking the node multinode-944570 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0830 20:25:25.879285  241645 kubeadm.go:322] [mark-control-plane] Marking the node multinode-944570 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0830 20:25:25.879369  241645 command_runner.go:130] > [bootstrap-token] Using token: 0rk0ip.77qqtwy1kihykz5t
	I0830 20:25:25.879383  241645 kubeadm.go:322] [bootstrap-token] Using token: 0rk0ip.77qqtwy1kihykz5t
	I0830 20:25:25.881082  241645 out.go:204]   - Configuring RBAC rules ...
	I0830 20:25:25.881213  241645 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 20:25:25.881226  241645 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 20:25:25.881326  241645 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0830 20:25:25.881339  241645 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0830 20:25:25.881489  241645 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 20:25:25.881497  241645 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 20:25:25.881635  241645 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 20:25:25.881643  241645 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 20:25:25.881817  241645 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 20:25:25.881833  241645 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 20:25:25.881942  241645 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 20:25:25.881962  241645 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 20:25:25.882094  241645 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0830 20:25:25.882102  241645 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0830 20:25:25.882161  241645 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0830 20:25:25.882185  241645 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0830 20:25:25.882283  241645 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0830 20:25:25.882290  241645 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0830 20:25:25.882296  241645 kubeadm.go:322] 
	I0830 20:25:25.882374  241645 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0830 20:25:25.882383  241645 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0830 20:25:25.882392  241645 kubeadm.go:322] 
	I0830 20:25:25.882520  241645 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0830 20:25:25.882538  241645 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0830 20:25:25.882544  241645 kubeadm.go:322] 
	I0830 20:25:25.882576  241645 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0830 20:25:25.882586  241645 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0830 20:25:25.882671  241645 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 20:25:25.882677  241645 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 20:25:25.882748  241645 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 20:25:25.882761  241645 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 20:25:25.882776  241645 kubeadm.go:322] 
	I0830 20:25:25.882850  241645 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0830 20:25:25.882862  241645 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0830 20:25:25.882869  241645 kubeadm.go:322] 
	I0830 20:25:25.882950  241645 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0830 20:25:25.882959  241645 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0830 20:25:25.882969  241645 kubeadm.go:322] 
	I0830 20:25:25.883041  241645 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0830 20:25:25.883058  241645 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0830 20:25:25.883150  241645 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 20:25:25.883158  241645 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 20:25:25.883263  241645 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 20:25:25.883276  241645 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 20:25:25.883283  241645 kubeadm.go:322] 
	I0830 20:25:25.883409  241645 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0830 20:25:25.883423  241645 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0830 20:25:25.883515  241645 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0830 20:25:25.883530  241645 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0830 20:25:25.883546  241645 kubeadm.go:322] 
	I0830 20:25:25.883664  241645 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 0rk0ip.77qqtwy1kihykz5t \
	I0830 20:25:25.883674  241645 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0rk0ip.77qqtwy1kihykz5t \
	I0830 20:25:25.883818  241645 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:c7d83bf61acd2074da49416c1394017fb833fac06001902ce7698890024b9ad6 \
	I0830 20:25:25.883827  241645 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c7d83bf61acd2074da49416c1394017fb833fac06001902ce7698890024b9ad6 \
	I0830 20:25:25.883854  241645 command_runner.go:130] > 	--control-plane 
	I0830 20:25:25.883861  241645 kubeadm.go:322] 	--control-plane 
	I0830 20:25:25.883870  241645 kubeadm.go:322] 
	I0830 20:25:25.883981  241645 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0830 20:25:25.883990  241645 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0830 20:25:25.883996  241645 kubeadm.go:322] 
	I0830 20:25:25.884115  241645 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 0rk0ip.77qqtwy1kihykz5t \
	I0830 20:25:25.884125  241645 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0rk0ip.77qqtwy1kihykz5t \
	I0830 20:25:25.884249  241645 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:c7d83bf61acd2074da49416c1394017fb833fac06001902ce7698890024b9ad6 
	I0830 20:25:25.884275  241645 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c7d83bf61acd2074da49416c1394017fb833fac06001902ce7698890024b9ad6 
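Editor's note: the sha256 value in the join commands above is kubeadm's CA pin: the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, which a joining node compares against the CA it discovers. It can be recomputed from ca.crt; a short Go sketch, using the certs directory from the log:

    // Sketch: recompute the --discovery-token-ca-cert-hash printed above as
    // "sha256:" + SHA-256 over the CA cert's RawSubjectPublicKeyInfo.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }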
	I0830 20:25:25.884288  241645 cni.go:84] Creating CNI manager for ""
	I0830 20:25:25.884307  241645 cni.go:136] 1 nodes found, recommending kindnet
	I0830 20:25:25.886028  241645 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0830 20:25:25.887290  241645 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0830 20:25:25.894367  241645 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0830 20:25:25.894392  241645 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0830 20:25:25.894407  241645 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0830 20:25:25.894419  241645 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0830 20:25:25.894429  241645 command_runner.go:130] > Access: 2023-08-30 20:24:50.661585107 +0000
	I0830 20:25:25.894441  241645 command_runner.go:130] > Modify: 2023-08-24 15:47:28.000000000 +0000
	I0830 20:25:25.894452  241645 command_runner.go:130] > Change: 2023-08-30 20:24:48.918585107 +0000
	I0830 20:25:25.894460  241645 command_runner.go:130] >  Birth: -
	I0830 20:25:25.895015  241645 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0830 20:25:25.895031  241645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0830 20:25:25.946468  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0830 20:25:27.048421  241645 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0830 20:25:27.054294  241645 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0830 20:25:27.062899  241645 command_runner.go:130] > serviceaccount/kindnet created
	I0830 20:25:27.079585  241645 command_runner.go:130] > daemonset.apps/kindnet created
	I0830 20:25:27.082436  241645 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.135933063s)
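Editor's note: with one node found, the CNI manager picks kindnet and applies the copied manifest with the version-pinned kubectl against the node-local kubeconfig, creating the clusterrole, clusterrolebinding, serviceaccount, and daemonset listed above. A sketch of that apply step (hypothetical applyManifest helper; minikube actually routes this through its ssh_runner):

    // Sketch: run the version-pinned kubectl against the node-local admin
    // kubeconfig to apply a manifest, mirroring the Run line above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func applyManifest(version, manifest string) ([]byte, error) {
        kubectl := "/var/lib/minikube/binaries/" + version + "/kubectl"
        return exec.Command("sudo", kubectl, "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", manifest).CombinedOutput()
    }

    func main() {
        out, err := applyManifest("v1.28.1", "/var/tmp/minikube/cni.yaml")
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }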
	I0830 20:25:27.082479  241645 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 20:25:27.082588  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:27.082613  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7e60a4db8510b81002db541520f138fed781588 minikube.k8s.io/name=multinode-944570 minikube.k8s.io/updated_at=2023_08_30T20_25_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:27.103906  241645 command_runner.go:130] > -16
	I0830 20:25:27.104140  241645 ops.go:34] apiserver oom_adj: -16
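Editor's note: the -16 read back from /proc/<pid>/oom_adj confirms the API server received OOM protection; lower values make the kernel's OOM killer prefer other victims. A sketch of the same probe (note oom_adj is the legacy interface; oom_score_adj supersedes it):

    // Sketch: find kube-apiserver's pid with pgrep and read its oom_adj,
    // the same check as the /bin/bash -c line above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pid, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("kube-apiserver not running:", err)
            return
        }
        p := strings.Fields(string(pid))[0] // first matching pid
        adj, err := os.ReadFile("/proc/" + p + "/oom_adj")
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
    }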
	I0830 20:25:27.259503  241645 command_runner.go:130] > node/multinode-944570 labeled
	I0830 20:25:27.304084  241645 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0830 20:25:27.304257  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:27.408503  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:27.408688  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:27.505253  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:28.007756  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:28.114870  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:28.507452  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:28.596045  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:29.007855  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:29.094730  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:29.507317  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:29.582437  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:30.007615  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:30.099287  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:30.508014  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:30.598144  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:31.007849  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:31.096085  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:31.507378  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:31.607568  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:32.007769  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:32.095126  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:32.507951  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:32.600656  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:33.007821  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:33.089474  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:33.507716  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:33.584616  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:34.008071  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:34.104451  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:34.508111  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:34.600003  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:35.007552  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:35.096483  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:35.507638  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:35.596067  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:36.007719  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:36.096663  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:36.507181  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:36.600236  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:37.007888  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:37.102770  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:37.508097  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:37.605543  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:38.007169  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:38.099486  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:38.508190  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:38.727754  241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 20:25:39.007195  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 20:25:39.110641  241645 command_runner.go:130] > NAME      SECRETS   AGE
	I0830 20:25:39.110674  241645 command_runner.go:130] > default   0         1s
	I0830 20:25:39.112179  241645 kubeadm.go:1081] duration metric: took 12.029668645s to wait for elevateKubeSystemPrivileges.
	I0830 20:25:39.112214  241645 kubeadm.go:406] StartCluster complete in 23.886760086s
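Editor's note: the run of NotFound errors above is an intentional wait loop: the token controller creates the "default" ServiceAccount asynchronously after the namespace exists, so the runner polls `kubectl get sa default` at roughly 500ms intervals until it appears (about 12s here). A sketch of that pattern (interval and timeout values are illustrative):

    // Sketch: poll until the "default" ServiceAccount exists, treating a
    // non-zero kubectl exit as "not yet", like the loop in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForDefaultSA(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.1/kubectl",
                "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                return nil // service account exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not found after %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }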
	I0830 20:25:39.112250  241645 settings.go:142] acquiring lock: {Name:mke973357c023e3c9107f2946103c543213b72a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 20:25:39.112344  241645 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17145-222139/kubeconfig
	I0830 20:25:39.112996  241645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-222139/kubeconfig: {Name:mke2c13974c9c1f627b1ef76f3c4bc0d9584894b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 20:25:39.113277  241645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 20:25:39.113367  241645 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 20:25:39.113492  241645 config.go:182] Loaded profile config "multinode-944570": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0830 20:25:39.113501  241645 addons.go:69] Setting default-storageclass=true in profile "multinode-944570"
	I0830 20:25:39.113518  241645 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-944570"
	I0830 20:25:39.113494  241645 addons.go:69] Setting storage-provisioner=true in profile "multinode-944570"
	I0830 20:25:39.113548  241645 addons.go:231] Setting addon storage-provisioner=true in "multinode-944570"
	I0830 20:25:39.113585  241645 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17145-222139/kubeconfig
	I0830 20:25:39.113610  241645 host.go:66] Checking if "multinode-944570" exists ...
	I0830 20:25:39.113910  241645 kapi.go:59] client config for multinode-944570: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.crt", KeyFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.key", CAFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 20:25:39.114028  241645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:25:39.114059  241645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:25:39.114095  241645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:25:39.114148  241645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:25:39.114855  241645 cert_rotation.go:137] Starting client certificate rotation controller
	I0830 20:25:39.115235  241645 round_trippers.go:463] GET https://192.168.39.254:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0830 20:25:39.115252  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:39.115263  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:39.115272  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:39.126389  241645 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0830 20:25:39.126416  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:39.126428  241645 round_trippers.go:580]     Content-Length: 291
	I0830 20:25:39.126437  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:39 GMT
	I0830 20:25:39.126445  241645 round_trippers.go:580]     Audit-Id: 7831cf68-a71e-460d-96a3-2487259424d4
	I0830 20:25:39.126454  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:39.126463  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:39.126472  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:39.126485  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:39.126515  241645 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c7c6dc1f-7aa3-4c43-b4bf-a6ffaa653be3","resourceVersion":"390","creationTimestamp":"2023-08-30T20:25:25Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0830 20:25:39.127043  241645 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c7c6dc1f-7aa3-4c43-b4bf-a6ffaa653be3","resourceVersion":"390","creationTimestamp":"2023-08-30T20:25:25Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0830 20:25:39.127122  241645 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0830 20:25:39.127136  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:39.127147  241645 round_trippers.go:473]     Content-Type: application/json
	I0830 20:25:39.127160  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:39.127174  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:39.129430  241645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44359
	I0830 20:25:39.129729  241645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41875
	I0830 20:25:39.129931  241645 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:25:39.130086  241645 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:25:39.130495  241645 main.go:141] libmachine: Using API Version  1
	I0830 20:25:39.130513  241645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:25:39.130541  241645 main.go:141] libmachine: Using API Version  1
	I0830 20:25:39.130563  241645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:25:39.130863  241645 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:25:39.130979  241645 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:25:39.131179  241645 main.go:141] libmachine: (multinode-944570) Calling .GetState
	I0830 20:25:39.131427  241645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:25:39.131459  241645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:25:39.133409  241645 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17145-222139/kubeconfig
	I0830 20:25:39.133768  241645 kapi.go:59] client config for multinode-944570: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.crt", KeyFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.key", CAFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 20:25:39.134176  241645 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0830 20:25:39.134201  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:39.134211  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:39.134222  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:39.139922  241645 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0830 20:25:39.139941  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:39.139948  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:39.139953  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:39.139959  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:39.139965  241645 round_trippers.go:580]     Content-Length: 109
	I0830 20:25:39.139973  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:39 GMT
	I0830 20:25:39.139985  241645 round_trippers.go:580]     Audit-Id: fff28a6e-3b77-4075-b7c6-485127c1d06c
	I0830 20:25:39.139997  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:39.140017  241645 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"391"},"items":[]}
	I0830 20:25:39.140142  241645 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0830 20:25:39.140160  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:39.140169  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:39 GMT
	I0830 20:25:39.140181  241645 round_trippers.go:580]     Audit-Id: d6200b84-e318-4e9e-b6e1-c2c3e782b8cf
	I0830 20:25:39.140192  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:39.140203  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:39.140214  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:39.140222  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:39.140229  241645 round_trippers.go:580]     Content-Length: 291
	I0830 20:25:39.140255  241645 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c7c6dc1f-7aa3-4c43-b4bf-a6ffaa653be3","resourceVersion":"391","creationTimestamp":"2023-08-30T20:25:25Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0830 20:25:39.140323  241645 addons.go:231] Setting addon default-storageclass=true in "multinode-944570"
	I0830 20:25:39.140367  241645 host.go:66] Checking if "multinode-944570" exists ...
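Editor's note: the empty StorageClassList a few lines up is what drives this decision: with no storage classes present, the default-storageclass addon is enabled so minikube can install its standard one. The same check expressed with client-go (kubeconfig path taken from the log; not minikube's addons code):

    // Sketch: list StorageClasses and treat an empty result as "install the
    // default", mirroring the GET .../storageclasses above.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17145-222139/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        if len(scs.Items) == 0 {
            fmt.Println("no StorageClass found; default-storageclass addon will create one")
        }
    }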
	I0830 20:25:39.140399  241645 round_trippers.go:463] GET https://192.168.39.254:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0830 20:25:39.140411  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:39.140421  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:39.140433  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:39.140704  241645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:25:39.140733  241645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:25:39.145101  241645 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 20:25:39.145124  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:39.145133  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:39.145139  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:39.145147  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:39.145155  241645 round_trippers.go:580]     Content-Length: 291
	I0830 20:25:39.145163  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:39 GMT
	I0830 20:25:39.145176  241645 round_trippers.go:580]     Audit-Id: 6944d652-b884-4106-8138-53b87fe4c71f
	I0830 20:25:39.145189  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:39.145211  241645 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c7c6dc1f-7aa3-4c43-b4bf-a6ffaa653be3","resourceVersion":"391","creationTimestamp":"2023-08-30T20:25:25Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0830 20:25:39.145307  241645 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-944570" context rescaled to 1 replicas
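Editor's note: the GET/PUT pair on .../deployments/coredns/scale above rescales coredns from 2 replicas to 1 through the autoscaling/v1 Scale subresource, since a single-node bootstrap does not need two. An equivalent client-go sketch (illustrative, not minikube's kapi code):

    // Sketch: rescale the coredns Deployment via the scale subresource, the
    // same Scale object the raw GET/PUT pair above manipulates.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17145-222139/kubeconfig")
        if err != nil {
            panic(err)
        }
        deployments := kubernetes.NewForConfigOrDie(cfg).AppsV1().Deployments("kube-system")
        scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        scale.Spec.Replicas = 1 // single-node cluster only needs one replica
        if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("coredns rescaled to 1 replica")
    }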
	I0830 20:25:39.145338  241645 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.254 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0830 20:25:39.148342  241645 out.go:177] * Verifying Kubernetes components...
	I0830 20:25:39.146942  241645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37995
	I0830 20:25:39.148851  241645 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:25:39.150360  241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 20:25:39.150915  241645 main.go:141] libmachine: Using API Version  1
	I0830 20:25:39.150949  241645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:25:39.151317  241645 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:25:39.151599  241645 main.go:141] libmachine: (multinode-944570) Calling .GetState
	I0830 20:25:39.153583  241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
	I0830 20:25:39.155405  241645 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 20:25:39.156868  241645 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 20:25:39.156887  241645 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 20:25:39.156901  241645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35409
	I0830 20:25:39.156909  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
	I0830 20:25:39.157298  241645 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:25:39.157864  241645 main.go:141] libmachine: Using API Version  1
	I0830 20:25:39.157890  241645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:25:39.158284  241645 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:25:39.158821  241645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:25:39.158860  241645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:25:39.160166  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:39.160598  241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:25:39.160633  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:39.160818  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
	I0830 20:25:39.161011  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:25:39.161181  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
	I0830 20:25:39.161322  241645 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa Username:docker}
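Editor's note: the sshutil line above opens a key-based SSH client to the node as user "docker" with the per-machine id_rsa key. A minimal sketch of such a client using golang.org/x/crypto/ssh (an assumption about the transport, not minikube's sshutil implementation; host-key checking is disabled for brevity and a real client should verify):

    // Sketch: dial an SSH client with public-key auth, matching the
    // IP/port/key/user tuple logged above.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func dial(ip, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
        }
        return ssh.Dial("tcp", ip+":22", cfg)
    }

    func main() {
        client, err := dial("192.168.39.254", "/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer client.Close()
        fmt.Println("connected")
    }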
	I0830 20:25:39.179849  241645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44619
	I0830 20:25:39.180311  241645 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:25:39.180972  241645 main.go:141] libmachine: Using API Version  1
	I0830 20:25:39.181003  241645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:25:39.181364  241645 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:25:39.181538  241645 main.go:141] libmachine: (multinode-944570) Calling .GetState
	I0830 20:25:39.183329  241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
	I0830 20:25:39.183603  241645 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 20:25:39.183620  241645 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 20:25:39.183643  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
	I0830 20:25:39.186228  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:39.186623  241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:25:39.186656  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:25:39.186838  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
	I0830 20:25:39.187039  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:25:39.187238  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
	I0830 20:25:39.187417  241645 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa Username:docker}
	I0830 20:25:39.391096  241645 command_runner.go:130] > apiVersion: v1
	I0830 20:25:39.391118  241645 command_runner.go:130] > data:
	I0830 20:25:39.391122  241645 command_runner.go:130] >   Corefile: |
	I0830 20:25:39.391127  241645 command_runner.go:130] >     .:53 {
	I0830 20:25:39.391131  241645 command_runner.go:130] >         errors
	I0830 20:25:39.391136  241645 command_runner.go:130] >         health {
	I0830 20:25:39.391141  241645 command_runner.go:130] >            lameduck 5s
	I0830 20:25:39.391144  241645 command_runner.go:130] >         }
	I0830 20:25:39.391148  241645 command_runner.go:130] >         ready
	I0830 20:25:39.391160  241645 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0830 20:25:39.391164  241645 command_runner.go:130] >            pods insecure
	I0830 20:25:39.391169  241645 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0830 20:25:39.391174  241645 command_runner.go:130] >            ttl 30
	I0830 20:25:39.391177  241645 command_runner.go:130] >         }
	I0830 20:25:39.391181  241645 command_runner.go:130] >         prometheus :9153
	I0830 20:25:39.391186  241645 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0830 20:25:39.391190  241645 command_runner.go:130] >            max_concurrent 1000
	I0830 20:25:39.391194  241645 command_runner.go:130] >         }
	I0830 20:25:39.391198  241645 command_runner.go:130] >         cache 30
	I0830 20:25:39.391201  241645 command_runner.go:130] >         loop
	I0830 20:25:39.391205  241645 command_runner.go:130] >         reload
	I0830 20:25:39.391211  241645 command_runner.go:130] >         loadbalance
	I0830 20:25:39.391215  241645 command_runner.go:130] >     }
	I0830 20:25:39.391220  241645 command_runner.go:130] > kind: ConfigMap
	I0830 20:25:39.391228  241645 command_runner.go:130] > metadata:
	I0830 20:25:39.391234  241645 command_runner.go:130] >   creationTimestamp: "2023-08-30T20:25:25Z"
	I0830 20:25:39.391238  241645 command_runner.go:130] >   name: coredns
	I0830 20:25:39.391244  241645 command_runner.go:130] >   namespace: kube-system
	I0830 20:25:39.391248  241645 command_runner.go:130] >   resourceVersion: "266"
	I0830 20:25:39.391253  241645 command_runner.go:130] >   uid: 989d9dad-32d4-44a5-9cf9-98995b18ae7f
	I0830 20:25:39.393916  241645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
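Editor's note: the pipeline above rewrites the Corefile fetched a few lines earlier: sed's i\ command inserts a hosts block resolving host.minikube.internal to the host gateway before the forward plugin (so it wins for that name), and a log directive before errors to enable query logging. A Go sketch of the same transformation (hypothetical patchCorefile helper, not minikube's code):

    // Sketch: reproduce the two sed insertions on a Corefile string —
    // a hosts{} block before "forward ." and "log" before "errors".
    package main

    import (
        "fmt"
        "strings"
    )

    func patchCorefile(corefile, hostIP string) string {
        var out strings.Builder
        for _, line := range strings.Split(corefile, "\n") {
            switch trimmed := strings.TrimSpace(line); {
            case strings.HasPrefix(trimmed, "forward ."):
                // sed's i\ inserts before the matched line
                fmt.Fprintf(&out, "        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
            case trimmed == "errors":
                out.WriteString("        log\n")
            }
            out.WriteString(line + "\n")
        }
        return out.String()
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}"
        fmt.Print(patchCorefile(corefile, "192.168.39.1"))
    }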
	I0830 20:25:39.394151  241645 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17145-222139/kubeconfig
	I0830 20:25:39.394467  241645 kapi.go:59] client config for multinode-944570: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.crt", KeyFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.key", CAFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
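
The rest.Config dump above shows the test authenticating to the apiserver at https://192.168.39.254:8443 with the profile's client certificate and key, loaded from the kubeconfig logged just before it. A minimal sketch of building an equivalent typed client with client-go (error handling kept deliberately crude):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the same kubeconfig the test loads and derive a rest.Config from it.
        config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17145-222139/kubeconfig")
        if err != nil {
            panic(err)
        }
        // This clientset would issue GET /api/v1/nodes/... requests like the ones logged below.
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        _ = clientset
    }
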
	I0830 20:25:39.394752  241645 node_ready.go:35] waiting up to 6m0s for node "multinode-944570" to be "Ready" ...
	I0830 20:25:39.394841  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:39.394852  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:39.394865  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:39.394878  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:39.397459  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:39.397489  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:39.397500  241645 round_trippers.go:580]     Audit-Id: cd6c7757-9b98-4f74-af25-e1cb9e4b7350
	I0830 20:25:39.397512  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:39.397521  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:39.397535  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:39.397549  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:39.397562  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:39 GMT
	I0830 20:25:39.397802  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:39.398624  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:39.398644  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:39.398655  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:39.398664  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:39.401200  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:39.401222  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:39.401233  241645 round_trippers.go:580]     Audit-Id: 140d938d-33c7-4dfa-9917-1b0b5f9b7c2e
	I0830 20:25:39.401243  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:39.401257  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:39.401270  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:39.401290  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:39.401298  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:39 GMT
	I0830 20:25:39.401406  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:39.421665  241645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 20:25:39.487655  241645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
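
minikube enables addons by writing their manifests under /etc/kubernetes/addons on the node and applying them with the bundled kubectl, which is what the two Run lines above record. An os/exec sketch of the same invocation shape (illustrative only; minikube drives this through its ssh_runner rather than a local exec.Command):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirror the logged command: sudo KUBECONFIG=... kubectl apply -f <manifest>.
        cmd := exec.Command("sudo", "env", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.28.1/kubectl", "apply", "-f",
            "/etc/kubernetes/addons/storage-provisioner.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            panic(err)
        }
    }
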
	I0830 20:25:39.902243  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:39.902281  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:39.902293  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:39.902302  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:39.904654  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:39.904681  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:39.904692  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:39.904808  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:39.904829  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:39.904842  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:39 GMT
	I0830 20:25:39.904854  241645 round_trippers.go:580]     Audit-Id: a6887700-3e28-4e04-b319-b891b9b1d69e
	I0830 20:25:39.904863  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:39.905020  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:40.284325  241645 command_runner.go:130] > configmap/coredns replaced
	I0830 20:25:40.284373  241645 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0830 20:25:40.402656  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:40.402678  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:40.402688  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:40.402694  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:40.405191  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:40.405213  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:40.405223  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:40.405230  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:40.405239  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:40.405250  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:40.405258  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:40 GMT
	I0830 20:25:40.405264  241645 round_trippers.go:580]     Audit-Id: 8439450d-968f-4ead-969c-3a4b562f1ee3
	I0830 20:25:40.405619  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:40.452449  241645 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0830 20:25:40.460958  241645 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0830 20:25:40.480685  241645 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0830 20:25:40.492917  241645 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0830 20:25:40.509256  241645 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0830 20:25:40.523729  241645 command_runner.go:130] > pod/storage-provisioner created
	I0830 20:25:40.527054  241645 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0830 20:25:40.527096  241645 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.039407422s)
	I0830 20:25:40.527143  241645 main.go:141] libmachine: Making call to close driver server
	I0830 20:25:40.527163  241645 main.go:141] libmachine: (multinode-944570) Calling .Close
	I0830 20:25:40.527545  241645 main.go:141] libmachine: (multinode-944570) DBG | Closing plugin on server side
	I0830 20:25:40.527596  241645 main.go:141] libmachine: Successfully made call to close driver server
	I0830 20:25:40.527611  241645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 20:25:40.527637  241645 main.go:141] libmachine: Making call to close driver server
	I0830 20:25:40.527665  241645 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.105959238s)
	I0830 20:25:40.527715  241645 main.go:141] libmachine: Making call to close driver server
	I0830 20:25:40.527733  241645 main.go:141] libmachine: (multinode-944570) Calling .Close
	I0830 20:25:40.527673  241645 main.go:141] libmachine: (multinode-944570) Calling .Close
	I0830 20:25:40.527983  241645 main.go:141] libmachine: Successfully made call to close driver server
	I0830 20:25:40.528001  241645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 20:25:40.528078  241645 main.go:141] libmachine: Successfully made call to close driver server
	I0830 20:25:40.528096  241645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 20:25:40.528111  241645 main.go:141] libmachine: Making call to close driver server
	I0830 20:25:40.528124  241645 main.go:141] libmachine: (multinode-944570) Calling .Close
	I0830 20:25:40.528178  241645 main.go:141] libmachine: Making call to close driver server
	I0830 20:25:40.528188  241645 main.go:141] libmachine: (multinode-944570) Calling .Close
	I0830 20:25:40.528358  241645 main.go:141] libmachine: (multinode-944570) DBG | Closing plugin on server side
	I0830 20:25:40.528433  241645 main.go:141] libmachine: (multinode-944570) DBG | Closing plugin on server side
	I0830 20:25:40.528450  241645 main.go:141] libmachine: Successfully made call to close driver server
	I0830 20:25:40.528472  241645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 20:25:40.528556  241645 main.go:141] libmachine: Successfully made call to close driver server
	I0830 20:25:40.528584  241645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 20:25:40.530496  241645 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0830 20:25:40.531932  241645 addons.go:502] enable addons completed in 1.418563326s: enabled=[storage-provisioner default-storageclass]
	I0830 20:25:40.902483  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:40.902507  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:40.902516  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:40.902522  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:40.905485  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:40.905513  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:40.905523  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:40 GMT
	I0830 20:25:40.905533  241645 round_trippers.go:580]     Audit-Id: 6683fd26-5ef4-47ed-ae19-94ff3b0a5f4c
	I0830 20:25:40.905540  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:40.905548  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:40.905555  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:40.905564  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:40.905707  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:41.402269  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:41.402295  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:41.402304  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:41.402310  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:41.406012  241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 20:25:41.406040  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:41.406049  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:41 GMT
	I0830 20:25:41.406055  241645 round_trippers.go:580]     Audit-Id: af02763c-2974-4778-880b-50daaa9235fe
	I0830 20:25:41.406060  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:41.406066  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:41.406071  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:41.406077  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:41.406418  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:41.406838  241645 node_ready.go:58] node "multinode-944570" has status "Ready":"False"
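
The alternating .40x/.90x timestamps show node_ready.go re-issuing the GET roughly every 500 milliseconds, within the 6m0s budget logged above, until the node's Ready condition turns True. A minimal client-go sketch of such a wait loop (illustrative; not the actual node_ready.go implementation):

    package nodewait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node object until its Ready condition reports
    // True, or fails once the timeout elapses.
    func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("node %q did not become Ready within %s", name, timeout)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
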
	I0830 20:25:41.902133  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:41.902158  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:41.902167  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:41.902175  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:41.904959  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:41.904988  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:41.904999  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:41.905008  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:41.905020  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:41 GMT
	I0830 20:25:41.905029  241645 round_trippers.go:580]     Audit-Id: 158b970e-8438-434d-8f2b-2a5319e4503d
	I0830 20:25:41.905038  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:41.905045  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:41.905338  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:42.401997  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:42.402022  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:42.402030  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:42.402036  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:42.404707  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:42.404733  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:42.404743  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:42 GMT
	I0830 20:25:42.404753  241645 round_trippers.go:580]     Audit-Id: 49b18478-d297-4d21-82bf-80723ec332bb
	I0830 20:25:42.404762  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:42.404773  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:42.404781  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:42.404790  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:42.405184  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:42.903000  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:42.903032  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:42.903046  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:42.903056  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:42.905834  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:42.905854  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:42.905861  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:42 GMT
	I0830 20:25:42.905867  241645 round_trippers.go:580]     Audit-Id: 087184ac-7340-433f-b160-4e451ac8c785
	I0830 20:25:42.905872  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:42.905878  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:42.905884  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:42.905894  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:42.906143  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:43.402491  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:43.402511  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:43.402519  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:43.402526  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:43.404977  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:43.405003  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:43.405014  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:43.405023  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:43.405031  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:43.405039  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:43.405048  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:43 GMT
	I0830 20:25:43.405055  241645 round_trippers.go:580]     Audit-Id: dc72bcf1-ab08-4a6c-9838-57f349b98460
	I0830 20:25:43.405267  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:43.901950  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:43.901977  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:43.901990  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:43.902000  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:43.905380  241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 20:25:43.905407  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:43.905418  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:43.905427  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:43.905436  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:43.905443  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:43.905452  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:43 GMT
	I0830 20:25:43.905460  241645 round_trippers.go:580]     Audit-Id: 881598e7-6c2f-4f4a-ac62-d322414fc74d
	I0830 20:25:43.905941  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:43.906328  241645 node_ready.go:58] node "multinode-944570" has status "Ready":"False"
	I0830 20:25:44.402302  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:44.402327  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:44.402338  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:44.402354  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:44.405435  241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 20:25:44.405459  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:44.405466  241645 round_trippers.go:580]     Audit-Id: bee4395a-00f5-4b47-95bc-56558d3ff7b5
	I0830 20:25:44.405472  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:44.405477  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:44.405482  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:44.405487  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:44.405493  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:44 GMT
	I0830 20:25:44.406047  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:44.902877  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:44.902905  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:44.902916  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:44.902925  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:44.905673  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:44.905696  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:44.905703  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:44.905709  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:44.905715  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:44.905720  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:44.905725  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:44 GMT
	I0830 20:25:44.905734  241645 round_trippers.go:580]     Audit-Id: 52ecfd25-b95a-4fca-9139-c0202a1bcf9f
	I0830 20:25:44.905897  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:45.402532  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:45.402554  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:45.402564  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:45.402572  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:45.405687  241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 20:25:45.405706  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:45.405713  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:45.405718  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:45.405725  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:45.405734  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:45.405742  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:45 GMT
	I0830 20:25:45.405760  241645 round_trippers.go:580]     Audit-Id: 20d0e8c8-8acb-4919-9a21-fee0d25bf906
	I0830 20:25:45.406021  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:45.902795  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:45.902827  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:45.902840  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:45.902850  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:45.905724  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:45.905751  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:45.905759  241645 round_trippers.go:580]     Audit-Id: 7c77e3c0-7f61-4e11-a63c-d5765e8a21a1
	I0830 20:25:45.905765  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:45.905771  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:45.905776  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:45.905781  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:45.905787  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:45 GMT
	I0830 20:25:45.906014  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:45.906438  241645 node_ready.go:58] node "multinode-944570" has status "Ready":"False"
	I0830 20:25:46.402709  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:46.402741  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:46.402755  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:46.402763  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:46.405368  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:46.405397  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:46.405409  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:46.405418  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:46.405425  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:46.405434  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:46 GMT
	I0830 20:25:46.405445  241645 round_trippers.go:580]     Audit-Id: 56b7412e-dd0d-4793-906d-12925a78c023
	I0830 20:25:46.405458  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:46.405644  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:46.902308  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:46.902333  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:46.902342  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:46.902348  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:46.905209  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:46.905232  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:46.905244  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:46.905252  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:46.905259  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:46.905267  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:46 GMT
	I0830 20:25:46.905275  241645 round_trippers.go:580]     Audit-Id: 6dcd4b7f-f32a-4b09-9689-8bb582fd47fd
	I0830 20:25:46.905284  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:46.905393  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:47.402231  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:47.402280  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:47.402293  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:47.402303  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:47.404651  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:47.404673  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:47.404686  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:47.404695  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:47.404702  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:47.404717  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:47.404726  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:47 GMT
	I0830 20:25:47.404736  241645 round_trippers.go:580]     Audit-Id: daae3920-9c7b-4f62-885f-1a22a2b62d51
	I0830 20:25:47.404924  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:47.902670  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:47.902695  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:47.902703  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:47.902710  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:47.905509  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:47.905527  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:47.905535  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:47.905541  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:47.905546  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:47 GMT
	I0830 20:25:47.905551  241645 round_trippers.go:580]     Audit-Id: 1b3e5548-03f2-4db0-ad65-32c2f03a7955
	I0830 20:25:47.905557  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:47.905566  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:47.905913  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:48.402536  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:48.402561  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:48.402570  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:48.402576  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:48.405230  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:48.405273  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:48.405285  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:48.405298  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:48.405311  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:48.405321  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:48.405334  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:48 GMT
	I0830 20:25:48.405344  241645 round_trippers.go:580]     Audit-Id: 666498f3-61c6-45f3-8ba8-2f142f7e0d16
	I0830 20:25:48.405496  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:48.405794  241645 node_ready.go:58] node "multinode-944570" has status "Ready":"False"
	I0830 20:25:48.902156  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:48.902177  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:48.902190  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:48.902196  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:48.904770  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:48.904792  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:48.904801  241645 round_trippers.go:580]     Audit-Id: cec37ccb-083a-4dc1-8ecd-8a07eb4a8ef6
	I0830 20:25:48.904809  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:48.904817  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:48.904824  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:48.904832  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:48.904840  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:48 GMT
	I0830 20:25:48.905169  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:49.402570  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:49.402603  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:49.402615  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:49.402622  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:49.405505  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:49.405529  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:49.405537  241645 round_trippers.go:580]     Audit-Id: 3dcca4d9-8d36-43f6-8089-4416c0a52b54
	I0830 20:25:49.405543  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:49.405548  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:49.405554  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:49.405559  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:49.405564  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:49 GMT
	I0830 20:25:49.405910  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:49.902661  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:49.902692  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:49.902701  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:49.902711  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:49.905796  241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 20:25:49.905815  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:49.905823  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:49 GMT
	I0830 20:25:49.905828  241645 round_trippers.go:580]     Audit-Id: 70287395-6632-4054-954e-1b7b8265acd1
	I0830 20:25:49.905834  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:49.905839  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:49.905844  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:49.905850  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:49.906011  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:50.402755  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:50.402783  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:50.402800  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:50.402806  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:50.405308  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:50.405330  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:50.405340  241645 round_trippers.go:580]     Audit-Id: 555d00a0-47e9-48eb-9f6e-75cb3deed87b
	I0830 20:25:50.405349  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:50.405358  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:50.405369  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:50.405382  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:50.405395  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:50 GMT
	I0830 20:25:50.405514  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:50.405837  241645 node_ready.go:58] node "multinode-944570" has status "Ready":"False"
	I0830 20:25:50.902187  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:50.902217  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:50.902227  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:50.902234  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:50.904974  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:50.904994  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:50.905001  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:50.905006  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:50.905011  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:50.905017  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:50 GMT
	I0830 20:25:50.905022  241645 round_trippers.go:580]     Audit-Id: be5bf54f-9ed7-491a-81c1-2dca62f67931
	I0830 20:25:50.905027  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:50.905260  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0830 20:25:51.401947  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:51.401975  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:51.401986  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:51.401993  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:51.404606  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:51.404635  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:51.404646  241645 round_trippers.go:580]     Audit-Id: 998ca9c7-7c14-411a-9f17-846800bea298
	I0830 20:25:51.404655  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:51.404663  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:51.404671  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:51.404679  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:51.404696  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:51 GMT
	I0830 20:25:51.404842  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0830 20:25:51.405248  241645 node_ready.go:49] node "multinode-944570" has status "Ready":"True"
	I0830 20:25:51.405272  241645 node_ready.go:38] duration metric: took 12.010502433s waiting for node "multinode-944570" to be "Ready" ...
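
The twelve-second wait just recorded is a plain readiness poll: the client re-fetches the Node object roughly every 500ms (visible in the timestamps above) until its Ready condition turns True, which happens here once resourceVersion 433 appears. A minimal client-go sketch of such a loop, assuming the standard Ready condition; waitForNodeReady and its package are illustrative names, not minikube's:

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReady re-fetches the node on a fixed interval until its Ready
// condition reports True, the context is cancelled, or the timeout elapses.
func waitForNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as "not ready yet" and keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
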
	I0830 20:25:51.405284  241645 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 20:25:51.405739  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods
	I0830 20:25:51.405770  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:51.405784  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:51.405795  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:51.410276  241645 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 20:25:51.410301  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:51.410312  241645 round_trippers.go:580]     Audit-Id: 318ccf73-0310-4bf4-a0ee-6ab55023c120
	I0830 20:25:51.410324  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:51.410336  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:51.410344  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:51.410356  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:51.410366  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:51 GMT
	I0830 20:25:51.411112  241645 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"439"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lzj6n","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45","resourceVersion":"439","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"45f04028-4f16-400d-9690-5524a005f3c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45f04028-4f16-400d-9690-5524a005f3c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54013 chars]
	I0830 20:25:51.415163  241645 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lzj6n" in "kube-system" namespace to be "Ready" ...
	I0830 20:25:51.415240  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lzj6n
	I0830 20:25:51.415251  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:51.415260  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:51.415273  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:51.417466  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:51.417482  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:51.417492  241645 round_trippers.go:580]     Audit-Id: 1f20a164-cab7-42e1-be1a-a1b0c1d0cd4f
	I0830 20:25:51.417501  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:51.417510  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:51.417521  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:51.417533  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:51.417545  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:51 GMT
	I0830 20:25:51.417678  241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lzj6n","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45","resourceVersion":"439","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"45f04028-4f16-400d-9690-5524a005f3c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45f04028-4f16-400d-9690-5524a005f3c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0830 20:25:51.418030  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:51.418039  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:51.418047  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:51.418056  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:51.420518  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:51.420538  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:51.420547  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:51.420555  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:51.420564  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:51.420570  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:51.420576  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:51 GMT
	I0830 20:25:51.420582  241645 round_trippers.go:580]     Audit-Id: 4737799e-e74d-46b8-8b93-d160240a4e8b
	I0830 20:25:51.420783  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0830 20:25:51.421058  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lzj6n
	I0830 20:25:51.421069  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:51.421076  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:51.421082  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:51.423319  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:51.423338  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:51.423345  241645 round_trippers.go:580]     Audit-Id: 01c00356-2e83-4f92-bfff-de45b77b2fe4
	I0830 20:25:51.423371  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:51.423380  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:51.423394  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:51.423406  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:51.423414  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:51 GMT
	I0830 20:25:51.423757  241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lzj6n","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45","resourceVersion":"439","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"45f04028-4f16-400d-9690-5524a005f3c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45f04028-4f16-400d-9690-5524a005f3c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0830 20:25:51.424078  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:51.424089  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:51.424096  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:51.424102  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:51.426234  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:51.426249  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:51.426258  241645 round_trippers.go:580]     Audit-Id: cf575695-430b-4957-8271-7d258140d436
	I0830 20:25:51.426266  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:51.426274  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:51.426283  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:51.426295  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:51.426307  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:51 GMT
	I0830 20:25:51.426465  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0830 20:25:51.927168  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lzj6n
	I0830 20:25:51.927198  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:51.927212  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:51.927222  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:51.935139  241645 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0830 20:25:51.935167  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:51.935176  241645 round_trippers.go:580]     Audit-Id: 88136e6f-52ef-4c98-89c2-a414b8728a84
	I0830 20:25:51.935185  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:51.935198  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:51.935206  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:51.935216  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:51.935228  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:51 GMT
	I0830 20:25:51.935378  241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lzj6n","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45","resourceVersion":"439","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"45f04028-4f16-400d-9690-5524a005f3c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45f04028-4f16-400d-9690-5524a005f3c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0830 20:25:51.935850  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:51.935865  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:51.935876  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:51.935885  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:51.938507  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:51.938527  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:51.938538  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:51.938547  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:51 GMT
	I0830 20:25:51.938556  241645 round_trippers.go:580]     Audit-Id: b57d91dc-97ce-49ce-8561-89b8e9e191a5
	I0830 20:25:51.938566  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:51.938572  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:51.938580  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:51.938737  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0830 20:25:52.427453  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lzj6n
	I0830 20:25:52.427478  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:52.427487  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:52.427494  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:52.429902  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:52.429926  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:52.429936  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:52.429946  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:52.429954  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:52 GMT
	I0830 20:25:52.429962  241645 round_trippers.go:580]     Audit-Id: 66f717e8-665a-4771-bae2-06e3252d915e
	I0830 20:25:52.429974  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:52.429986  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:52.430112  241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lzj6n","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45","resourceVersion":"439","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"45f04028-4f16-400d-9690-5524a005f3c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45f04028-4f16-400d-9690-5524a005f3c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0830 20:25:52.430595  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:52.430610  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:52.430617  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:52.430625  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:52.432675  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:52.432698  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:52.432707  241645 round_trippers.go:580]     Audit-Id: 7a99663d-37d9-46ee-ad2c-7695700250f3
	I0830 20:25:52.432716  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:52.432728  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:52.432736  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:52.432748  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:52.432758  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:52 GMT
	I0830 20:25:52.432986  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0830 20:25:52.927683  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lzj6n
	I0830 20:25:52.927708  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:52.927721  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:52.927727  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:52.930440  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:52.930468  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:52.930482  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:52.930491  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:52.930503  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:52.930519  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:52.930528  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:52 GMT
	I0830 20:25:52.930540  241645 round_trippers.go:580]     Audit-Id: db550cd4-45d2-42cd-afdf-84988dbaabdb
	I0830 20:25:52.930668  241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lzj6n","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45","resourceVersion":"439","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"45f04028-4f16-400d-9690-5524a005f3c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45f04028-4f16-400d-9690-5524a005f3c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0830 20:25:52.931273  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:52.931291  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:52.931320  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:52.931338  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:52.933758  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:52.933775  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:52.933782  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:52 GMT
	I0830 20:25:52.933790  241645 round_trippers.go:580]     Audit-Id: 4435011a-c7da-4513-9c65-ce98e9701347
	I0830 20:25:52.933798  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:52.933806  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:52.933818  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:52.933828  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:52.934216  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0830 20:25:53.427702  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lzj6n
	I0830 20:25:53.427726  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:53.427734  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:53.427740  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:53.430538  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:53.430558  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:53.430566  241645 round_trippers.go:580]     Audit-Id: ee96fe6e-d14f-4968-8d4e-9615df62c767
	I0830 20:25:53.430572  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:53.430577  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:53.430583  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:53.430603  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:53.430614  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:53 GMT
	I0830 20:25:53.430828  241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lzj6n","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45","resourceVersion":"453","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"45f04028-4f16-400d-9690-5524a005f3c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45f04028-4f16-400d-9690-5524a005f3c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
	I0830 20:25:53.431371  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:53.431385  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:53.431394  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:53.431400  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:53.433284  241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 20:25:53.433299  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:53.433305  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:53.433312  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:53.433320  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:53 GMT
	I0830 20:25:53.433330  241645 round_trippers.go:580]     Audit-Id: 4baf1276-1dc1-4a26-9607-4d7b8235d7c6
	I0830 20:25:53.433343  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:53.433351  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:53.433553  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0830 20:25:53.433911  241645 pod_ready.go:92] pod "coredns-5dd5756b68-lzj6n" in "kube-system" namespace has status "Ready":"True"
	I0830 20:25:53.433928  241645 pod_ready.go:81] duration metric: took 2.018737231s waiting for pod "coredns-5dd5756b68-lzj6n" in "kube-system" namespace to be "Ready" ...
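
Once the node is Ready, the trace lists the kube-system pods once and then waits on each system-critical pod in turn; a pod passes when its Ready condition is True, as coredns just did at resourceVersion 453. A short sketch of that per-pod test under the same assumption (isPodReady is an illustrative name, not minikube's):

package podwait

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the pod's Ready condition is True, which is the
// same test the pod_ready waiters in this trace apply to each system pod.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
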
	I0830 20:25:53.433941  241645 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-944570" in "kube-system" namespace to be "Ready" ...
	I0830 20:25:53.434004  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-944570
	I0830 20:25:53.434013  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:53.434024  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:53.434038  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:53.435720  241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 20:25:53.435740  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:53.435750  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:53.435758  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:53 GMT
	I0830 20:25:53.435769  241645 round_trippers.go:580]     Audit-Id: 63760209-2707-45ea-ac73-2e8482dfde07
	I0830 20:25:53.435777  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:53.435786  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:53.435794  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:53.435912  241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-944570","namespace":"kube-system","uid":"8a7e3daf-bab9-401d-9448-0dd7a1710cc9","resourceVersion":"424","creationTimestamp":"2023-08-30T20:25:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.254:2379","kubernetes.io/config.hash":"fb846e75466869998dbb9a265eafadb1","kubernetes.io/config.mirror":"fb846e75466869998dbb9a265eafadb1","kubernetes.io/config.seen":"2023-08-30T20:25:25.839839858Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
	I0830 20:25:53.436374  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:53.436390  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:53.436401  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:53.436418  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:53.438201  241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 20:25:53.438220  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:53.438227  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:53.438233  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:53 GMT
	I0830 20:25:53.438239  241645 round_trippers.go:580]     Audit-Id: 178cde66-8334-4942-b2e7-c1e2fd6c2850
	I0830 20:25:53.438248  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:53.438260  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:53.438268  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:53.438434  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0830 20:25:53.438740  241645 pod_ready.go:92] pod "etcd-multinode-944570" in "kube-system" namespace has status "Ready":"True"
	I0830 20:25:53.438754  241645 pod_ready.go:81] duration metric: took 4.805533ms waiting for pod "etcd-multinode-944570" in "kube-system" namespace to be "Ready" ...
	I0830 20:25:53.438767  241645 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-944570" in "kube-system" namespace to be "Ready" ...
	I0830 20:25:53.438834  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-944570
	I0830 20:25:53.438844  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:53.438852  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:53.438864  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:53.440664  241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 20:25:53.440677  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:53.440686  241645 round_trippers.go:580]     Audit-Id: b7e154a0-ffca-4afc-b69f-b08201308e2c
	I0830 20:25:53.440692  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:53.440697  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:53.440706  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:53.440723  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:53.440733  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:53 GMT
	I0830 20:25:53.440870  241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-944570","namespace":"kube-system","uid":"396cdb5a-0161-4c66-8588-6c1c62cae7be","resourceVersion":"425","creationTimestamp":"2023-08-30T20:25:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.254:8443","kubernetes.io/config.hash":"5c113dc76381297356051f3bc6bc6fd1","kubernetes.io/config.mirror":"5c113dc76381297356051f3bc6bc6fd1","kubernetes.io/config.seen":"2023-08-30T20:25:25.839841108Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
	I0830 20:25:53.441246  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:53.441258  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:53.441265  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:53.441272  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:53.442994  241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 20:25:53.443005  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:53.443011  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:53.443016  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:53 GMT
	I0830 20:25:53.443021  241645 round_trippers.go:580]     Audit-Id: 3e8feefa-96d4-45fc-bc26-dc479e95efc1
	I0830 20:25:53.443027  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:53.443035  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:53.443050  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:53.443288  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0830 20:25:53.443534  241645 pod_ready.go:92] pod "kube-apiserver-multinode-944570" in "kube-system" namespace has status "Ready":"True"
	I0830 20:25:53.443546  241645 pod_ready.go:81] duration metric: took 4.768245ms waiting for pod "kube-apiserver-multinode-944570" in "kube-system" namespace to be "Ready" ...
	I0830 20:25:53.443554  241645 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-944570" in "kube-system" namespace to be "Ready" ...
	I0830 20:25:53.443605  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-944570
	I0830 20:25:53.443612  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:53.443619  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:53.443625  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:53.445430  241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 20:25:53.445442  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:53.445447  241645 round_trippers.go:580]     Audit-Id: 074beef7-3b15-4d9b-ba05-713c29a5fcd7
	I0830 20:25:53.445453  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:53.445459  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:53.445466  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:53.445477  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:53.445493  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:53 GMT
	I0830 20:25:53.445633  241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-944570","namespace":"kube-system","uid":"6666fc21-62a9-4141-bb88-71bd4fe72b40","resourceVersion":"421","creationTimestamp":"2023-08-30T20:25:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ed3bbefd4c2f35595e2c0897a29a0a1c","kubernetes.io/config.mirror":"ed3bbefd4c2f35595e2c0897a29a0a1c","kubernetes.io/config.seen":"2023-08-30T20:25:25.839841993Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
	I0830 20:25:53.446053  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:53.446067  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:53.446076  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:53.446082  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:53.448153  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:53.448167  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:53.448173  241645 round_trippers.go:580]     Audit-Id: 1777e997-989d-4172-bc3d-dd380b42e61b
	I0830 20:25:53.448178  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:53.448183  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:53.448188  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:53.448199  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:53.448210  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:53 GMT
	I0830 20:25:53.448331  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0830 20:25:53.448644  241645 pod_ready.go:92] pod "kube-controller-manager-multinode-944570" in "kube-system" namespace has status "Ready":"True"
	I0830 20:25:53.448660  241645 pod_ready.go:81] duration metric: took 5.097503ms waiting for pod "kube-controller-manager-multinode-944570" in "kube-system" namespace to be "Ready" ...
	I0830 20:25:53.448672  241645 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nqnp2" in "kube-system" namespace to be "Ready" ...
	I0830 20:25:53.602018  241645 request.go:629] Waited for 153.258705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqnp2
	I0830 20:25:53.602085  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqnp2
	I0830 20:25:53.602089  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:53.602097  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:53.602104  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:53.604877  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:53.604903  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:53.604913  241645 round_trippers.go:580]     Audit-Id: 06bcaea7-4871-4beb-ae16-50497caeae81
	I0830 20:25:53.604919  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:53.604924  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:53.604930  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:53.604935  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:53.604940  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:53 GMT
	I0830 20:25:53.605174  241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nqnp2","generateName":"kube-proxy-","namespace":"kube-system","uid":"fc7f17e0-b6ac-48c3-b449-e4eb3325505c","resourceVersion":"408","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"77539e61-eb1a-4d08-91c1-22ad50311843","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77539e61-eb1a-4d08-91c1-22ad50311843\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0830 20:25:53.803020  241645 request.go:629] Waited for 197.404051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:53.803099  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:53.803104  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:53.803114  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:53.803124  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:53.806197  241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 20:25:53.806217  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:53.806225  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:53.806230  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:53 GMT
	I0830 20:25:53.806235  241645 round_trippers.go:580]     Audit-Id: 9bd2bc2e-9df3-41ab-a330-44333d74012c
	I0830 20:25:53.806241  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:53.806246  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:53.806251  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:53.806482  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0830 20:25:53.806824  241645 pod_ready.go:92] pod "kube-proxy-nqnp2" in "kube-system" namespace has status "Ready":"True"
	I0830 20:25:53.806839  241645 pod_ready.go:81] duration metric: took 358.15537ms waiting for pod "kube-proxy-nqnp2" in "kube-system" namespace to be "Ready" ...
	I0830 20:25:53.806848  241645 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-944570" in "kube-system" namespace to be "Ready" ...
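
The recurring "Waited for … due to client-side throttling, not priority and fairness" messages in this log come from client-go's default request rate limiter, a token bucket attached to the rest.Config (defaults are QPS=5, Burst=10, so bursts of polling GETs get delayed). A sketch of raising those limits; the values and kubeconfig handling are illustrative only:

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/flowcontrol"
)

func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	// Replace the default QPS=5/Burst=10 token bucket; requests beyond the
	// bucket are delayed client-side, producing the "Waited for ..." lines.
	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(50, 100)
	return kubernetes.NewForConfig(cfg)
}
```
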
	I0830 20:25:54.002316  241645 request.go:629] Waited for 195.376223ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-944570
	I0830 20:25:54.002377  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-944570
	I0830 20:25:54.002382  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:54.002390  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:54.002397  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:54.005347  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:54.005367  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:54.005380  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:53 GMT
	I0830 20:25:54.005395  241645 round_trippers.go:580]     Audit-Id: 72efda1c-496f-4b60-a340-d541d4d7d460
	I0830 20:25:54.005406  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:54.005415  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:54.005426  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:54.005433  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:54.005537  241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-944570","namespace":"kube-system","uid":"c2c628f7-bc4f-4f01-b67d-e105c72b8275","resourceVersion":"422","creationTimestamp":"2023-08-30T20:25:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"21d92ce9120286f1f3c68c1f19570340","kubernetes.io/config.mirror":"21d92ce9120286f1f3c68c1f19570340","kubernetes.io/config.seen":"2023-08-30T20:25:25.839835923Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
	I0830 20:25:54.202399  241645 request.go:629] Waited for 196.421645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:54.202474  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:25:54.202482  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:54.202494  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:54.202504  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:54.206670  241645 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 20:25:54.206691  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:54.206698  241645 round_trippers.go:580]     Audit-Id: d91318e8-8e2b-40d3-9054-c77c030fab26
	I0830 20:25:54.206704  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:54.206718  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:54.206739  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:54.206752  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:54.206760  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:54 GMT
	I0830 20:25:54.207442  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0830 20:25:54.207829  241645 pod_ready.go:92] pod "kube-scheduler-multinode-944570" in "kube-system" namespace has status "Ready":"True"
	I0830 20:25:54.207847  241645 pod_ready.go:81] duration metric: took 400.992914ms waiting for pod "kube-scheduler-multinode-944570" in "kube-system" namespace to be "Ready" ...
	I0830 20:25:54.207861  241645 pod_ready.go:38] duration metric: took 2.802537009s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
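
Each "waiting up to 6m0s for pod …" step above is a poll-until-timeout loop around a readiness predicate. A sketch of that shape using apimachinery's wait helpers (the interval and wrapper are assumptions, not minikube's exact loop):

```go
package main

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForReady polls check every 500ms until it returns true or the
// 6-minute budget is spent, mirroring the "waiting up to 6m0s" loops above.
func waitForReady(ctx context.Context, check func(context.Context) (bool, error)) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true, check)
}
```
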
	I0830 20:25:54.207889  241645 api_server.go:52] waiting for apiserver process to appear ...
	I0830 20:25:54.207951  241645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 20:25:54.230824  241645 command_runner.go:130] > 1820
	I0830 20:25:54.230859  241645 api_server.go:72] duration metric: took 15.085488796s to wait for apiserver process to appear ...
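
The ssh_runner step above runs pgrep inside the VM and treats the printed PID (1820 here) as proof the apiserver process exists. A self-contained sketch of the same probe over golang.org/x/crypto/ssh; the client setup is assumed to exist already:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"

	"golang.org/x/crypto/ssh"
)

// apiserverPID runs the same pgrep as the log above on the remote VM and
// parses the newest matching PID from stdout.
func apiserverPID(client *ssh.Client) (int, error) {
	sess, err := client.NewSession()
	if err != nil {
		return 0, err
	}
	defer sess.Close()
	out, err := sess.Output(`sudo pgrep -xnf kube-apiserver.*minikube.*`)
	if err != nil {
		return 0, fmt.Errorf("apiserver process not found: %w", err)
	}
	return strconv.Atoi(strings.TrimSpace(string(out)))
}
```
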
	I0830 20:25:54.230867  241645 api_server.go:88] waiting for apiserver healthz status ...
	I0830 20:25:54.230884  241645 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0830 20:25:54.236429  241645 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
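
The healthz check is a plain HTTPS GET that succeeds on a 200 with body "ok". A stdlib-only sketch; the CA-file path and trust handling are assumptions for illustration:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

// checkHealthz GETs <endpoint>/healthz and succeeds on HTTP 200,
// like the "returned 200: ok" line above.
func checkHealthz(endpoint, caFile string) error {
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{RootCAs: pool},
	}}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}
```
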
	I0830 20:25:54.236488  241645 round_trippers.go:463] GET https://192.168.39.254:8443/version
	I0830 20:25:54.236495  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:54.236503  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:54.236512  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:54.237485  241645 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0830 20:25:54.237500  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:54.237509  241645 round_trippers.go:580]     Content-Length: 263
	I0830 20:25:54.237517  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:54 GMT
	I0830 20:25:54.237526  241645 round_trippers.go:580]     Audit-Id: 062201ad-365d-44be-ac37-e52b08304abc
	I0830 20:25:54.237541  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:54.237549  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:54.237561  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:54.237570  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:54.237593  241645 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0830 20:25:54.237691  241645 api_server.go:141] control plane version: v1.28.1
	I0830 20:25:54.237706  241645 api_server.go:131] duration metric: took 6.83424ms to wait for apiserver health ...
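
The /version response body above is the standard apiserver version document; the check only needs gitVersion out of it. A minimal decoding sketch (the local struct is an assumption; only the fields shown in the body are used):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// serverVersion fetches /version and extracts gitVersion ("v1.28.1" above),
// which the log then reports as the control plane version.
func serverVersion(client *http.Client, endpoint string) (string, error) {
	resp, err := client.Get(endpoint + "/version")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("/version returned %d", resp.StatusCode)
	}
	var info struct {
		GitVersion string `json:"gitVersion"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
		return "", err
	}
	return info.GitVersion, nil
}
```
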
	I0830 20:25:54.237713  241645 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 20:25:54.402053  241645 request.go:629] Waited for 164.268495ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods
	I0830 20:25:54.402139  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods
	I0830 20:25:54.402144  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:54.402152  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:54.402159  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:54.405537  241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 20:25:54.405565  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:54.405576  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:54.405584  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:54.405592  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:54.405601  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:54 GMT
	I0830 20:25:54.405609  241645 round_trippers.go:580]     Audit-Id: 585a03e6-be8d-4612-a2ee-0f655d6fa953
	I0830 20:25:54.405617  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:54.406496  241645 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"459"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lzj6n","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45","resourceVersion":"453","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"45f04028-4f16-400d-9690-5524a005f3c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45f04028-4f16-400d-9690-5524a005f3c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54129 chars]
	I0830 20:25:54.408224  241645 system_pods.go:59] 8 kube-system pods found
	I0830 20:25:54.408252  241645 system_pods.go:61] "coredns-5dd5756b68-lzj6n" [19a6c9fa-86e0-4e7f-a62b-28ee984bdd45] Running
	I0830 20:25:54.408260  241645 system_pods.go:61] "etcd-multinode-944570" [8a7e3daf-bab9-401d-9448-0dd7a1710cc9] Running
	I0830 20:25:54.408266  241645 system_pods.go:61] "kindnet-mm2wq" [59593f9a-5462-4392-8bdc-a8150d335166] Running
	I0830 20:25:54.408273  241645 system_pods.go:61] "kube-apiserver-multinode-944570" [396cdb5a-0161-4c66-8588-6c1c62cae7be] Running
	I0830 20:25:54.408280  241645 system_pods.go:61] "kube-controller-manager-multinode-944570" [6666fc21-62a9-4141-bb88-71bd4fe72b40] Running
	I0830 20:25:54.408287  241645 system_pods.go:61] "kube-proxy-nqnp2" [fc7f17e0-b6ac-48c3-b449-e4eb3325505c] Running
	I0830 20:25:54.408294  241645 system_pods.go:61] "kube-scheduler-multinode-944570" [c2c628f7-bc4f-4f01-b67d-e105c72b8275] Running
	I0830 20:25:54.408304  241645 system_pods.go:61] "storage-provisioner" [4e79c194-f047-45a2-9ed4-ffafbe983cda] Running
	I0830 20:25:54.408311  241645 system_pods.go:74] duration metric: took 170.591918ms to wait for pod list to return data ...
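
The system_pods step lists everything in kube-system and logs each pod with its UID and phase, as above. A sketch of that enumeration with client-go (helper name illustrative):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dumpSystemPods lists kube-system pods the way the log above summarizes
// them: a count, then one line per pod with its UID and phase.
func dumpSystemPods(ctx context.Context, cs kubernetes.Interface) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
	return nil
}
```
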
	I0830 20:25:54.408321  241645 default_sa.go:34] waiting for default service account to be created ...
	I0830 20:25:54.602843  241645 request.go:629] Waited for 194.410178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/namespaces/default/serviceaccounts
	I0830 20:25:54.602920  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/default/serviceaccounts
	I0830 20:25:54.602933  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:54.602945  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:54.602956  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:54.605658  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:54.605691  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:54.605702  241645 round_trippers.go:580]     Audit-Id: c71947ca-1ac3-4883-ac79-cfee40bbb882
	I0830 20:25:54.605709  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:54.605717  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:54.605726  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:54.605739  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:54.605748  241645 round_trippers.go:580]     Content-Length: 261
	I0830 20:25:54.605761  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:54 GMT
	I0830 20:25:54.605789  241645 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"459"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a3dad9c1-08ae-4f4f-834f-75347ebf1272","resourceVersion":"344","creationTimestamp":"2023-08-30T20:25:38Z"}}]}
	I0830 20:25:54.605996  241645 default_sa.go:45] found service account: "default"
	I0830 20:25:54.606014  241645 default_sa.go:55] duration metric: took 197.685249ms for default service account to be created ...
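
The default_sa step just polls until the "default" ServiceAccount exists. The log above does it with a List of the namespace; a direct Get works the same way for the existence test, and distinguishing NotFound ("keep polling") from other errors ("fail") is the important part. A sketch:

```go
package main

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// defaultSAExists reports whether the "default" ServiceAccount has been
// created yet; NotFound means "not yet, retry", anything else is fatal.
func defaultSAExists(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, nil
	}
	if err != nil {
		return false, err
	}
	return true, nil
}
```
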
	I0830 20:25:54.606025  241645 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 20:25:54.802521  241645 request.go:629] Waited for 196.399828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods
	I0830 20:25:54.802586  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods
	I0830 20:25:54.802597  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:54.802608  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:54.802619  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:54.806528  241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 20:25:54.806551  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:54.806561  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:54.806570  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:54.806577  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:54.806585  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:54.806593  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:54 GMT
	I0830 20:25:54.806602  241645 round_trippers.go:580]     Audit-Id: 0a6c2ba5-2180-48a8-9064-77b4d96ce009
	I0830 20:25:54.807627  241645 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"459"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lzj6n","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45","resourceVersion":"453","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"45f04028-4f16-400d-9690-5524a005f3c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45f04028-4f16-400d-9690-5524a005f3c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54129 chars]
	I0830 20:25:54.809307  241645 system_pods.go:86] 8 kube-system pods found
	I0830 20:25:54.809329  241645 system_pods.go:89] "coredns-5dd5756b68-lzj6n" [19a6c9fa-86e0-4e7f-a62b-28ee984bdd45] Running
	I0830 20:25:54.809338  241645 system_pods.go:89] "etcd-multinode-944570" [8a7e3daf-bab9-401d-9448-0dd7a1710cc9] Running
	I0830 20:25:54.809344  241645 system_pods.go:89] "kindnet-mm2wq" [59593f9a-5462-4392-8bdc-a8150d335166] Running
	I0830 20:25:54.809350  241645 system_pods.go:89] "kube-apiserver-multinode-944570" [396cdb5a-0161-4c66-8588-6c1c62cae7be] Running
	I0830 20:25:54.809358  241645 system_pods.go:89] "kube-controller-manager-multinode-944570" [6666fc21-62a9-4141-bb88-71bd4fe72b40] Running
	I0830 20:25:54.809365  241645 system_pods.go:89] "kube-proxy-nqnp2" [fc7f17e0-b6ac-48c3-b449-e4eb3325505c] Running
	I0830 20:25:54.809375  241645 system_pods.go:89] "kube-scheduler-multinode-944570" [c2c628f7-bc4f-4f01-b67d-e105c72b8275] Running
	I0830 20:25:54.809382  241645 system_pods.go:89] "storage-provisioner" [4e79c194-f047-45a2-9ed4-ffafbe983cda] Running
	I0830 20:25:54.809392  241645 system_pods.go:126] duration metric: took 203.361169ms to wait for k8s-apps to be running ...
	I0830 20:25:54.809405  241645 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 20:25:54.809457  241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 20:25:54.822614  241645 system_svc.go:56] duration metric: took 13.200237ms WaitForService to wait for kubelet.
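
The kubelet service check above relies on systemctl's exit status rather than its output: with --quiet, exit 0 means active and a nonzero exit means inactive or failed. A local-exec sketch of the same test (standing in for the SSH runner used in the log):

```go
package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive mirrors `sudo systemctl is-active --quiet service kubelet`:
// a clean exit means active; an ExitError means inactive/failed.
func kubeletActive() (bool, error) {
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	if err == nil {
		return true, nil
	}
	if _, ok := err.(*exec.ExitError); ok {
		return false, nil
	}
	return false, fmt.Errorf("could not run systemctl: %w", err)
}
```
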
	I0830 20:25:54.822640  241645 kubeadm.go:581] duration metric: took 15.677269744s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 20:25:54.822661  241645 node_conditions.go:102] verifying NodePressure condition ...
	I0830 20:25:55.002030  241645 request.go:629] Waited for 179.292452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/nodes
	I0830 20:25:55.002106  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes
	I0830 20:25:55.002113  241645 round_trippers.go:469] Request Headers:
	I0830 20:25:55.002125  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:25:55.002152  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:25:55.004786  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:25:55.004822  241645 round_trippers.go:577] Response Headers:
	I0830 20:25:55.004837  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:25:55.004846  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:25:55.004855  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:25:55.004864  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:25:55.004872  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:25:54 GMT
	I0830 20:25:55.004881  241645 round_trippers.go:580]     Audit-Id: 66f3267a-20be-49a8-a57b-143a9b2c40a1
	I0830 20:25:55.005016  241645 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"459"},"items":[{"metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4836 chars]
	I0830 20:25:55.005476  241645 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 20:25:55.005504  241645 node_conditions.go:123] node cpu capacity is 2
	I0830 20:25:55.005522  241645 node_conditions.go:105] duration metric: took 182.855296ms to run NodePressure ...
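
The two capacity figures above (17784752Ki of ephemeral storage, 2 CPUs) are exposed on the Node object's Status.Capacity resource list, which the NodePressure check can read directly. A sketch of pulling those fields off a fetched Node:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// logNodeCapacity prints the two capacity figures the log above reports.
// Capacity is a ResourceList (map of resource name to quantity).
func logNodeCapacity(node *corev1.Node) {
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
}
```
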
	I0830 20:25:55.005536  241645 start.go:228] waiting for startup goroutines ...
	I0830 20:25:55.005545  241645 start.go:233] waiting for cluster config update ...
	I0830 20:25:55.005558  241645 start.go:242] writing updated cluster config ...
	I0830 20:25:55.008221  241645 out.go:177] 
	I0830 20:25:55.009869  241645 config.go:182] Loaded profile config "multinode-944570": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0830 20:25:55.009950  241645 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/config.json ...
	I0830 20:25:55.011573  241645 out.go:177] * Starting worker node multinode-944570-m02 in cluster multinode-944570
	I0830 20:25:55.012857  241645 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0830 20:25:55.012880  241645 cache.go:57] Caching tarball of preloaded images
	I0830 20:25:55.012989  241645 preload.go:174] Found /home/jenkins/minikube-integration/17145-222139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0830 20:25:55.013000  241645 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0830 20:25:55.013075  241645 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/config.json ...
	I0830 20:25:55.013227  241645 start.go:365] acquiring machines lock for multinode-944570-m02: {Name:mk9a092bb7d2f42c1b785aa1d546d37ad26cec77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 20:25:55.013267  241645 start.go:369] acquired machines lock for "multinode-944570-m02" in 22.672µs
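
Provisioning is serialized behind a per-profile machines lock with a 13-minute acquisition timeout and 500ms retry delay, per the lock spec above. minikube uses a third-party mutex package for this; a simple flock-based sketch with the same shape, purely illustrative:

```go
package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// acquireMachineLock takes an exclusive, non-blocking flock on a per-profile
// lock file, retrying every 500ms until the timeout expires (Linux-only).
func acquireMachineLock(path string, timeout time.Duration) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	deadline := time.Now().Add(timeout)
	for {
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
			return f, nil
		}
		if time.Now().After(deadline) {
			f.Close()
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```
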
	I0830 20:25:55.013285  241645 start.go:93] Provisioning new machine with config: &{Name:multinode-944570 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.1 ClusterName:multinode-944570 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.254 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequ
ested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0830 20:25:55.013352  241645 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0830 20:25:55.015104  241645 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0830 20:25:55.015219  241645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:25:55.015253  241645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:25:55.029775  241645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39841
	I0830 20:25:55.030254  241645 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:25:55.030745  241645 main.go:141] libmachine: Using API Version  1
	I0830 20:25:55.030765  241645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:25:55.031060  241645 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:25:55.031328  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetMachineName
	I0830 20:25:55.031480  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .DriverName
	I0830 20:25:55.031657  241645 start.go:159] libmachine.API.Create for "multinode-944570" (driver="kvm2")
	I0830 20:25:55.031705  241645 client.go:168] LocalClient.Create starting
	I0830 20:25:55.031741  241645 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem
	I0830 20:25:55.031777  241645 main.go:141] libmachine: Decoding PEM data...
	I0830 20:25:55.031801  241645 main.go:141] libmachine: Parsing certificate...
	I0830 20:25:55.031867  241645 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem
	I0830 20:25:55.031894  241645 main.go:141] libmachine: Decoding PEM data...
	I0830 20:25:55.031913  241645 main.go:141] libmachine: Parsing certificate...
	I0830 20:25:55.031937  241645 main.go:141] libmachine: Running pre-create checks...
	I0830 20:25:55.031950  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .PreCreateCheck
	I0830 20:25:55.032124  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetConfigRaw
	I0830 20:25:55.032554  241645 main.go:141] libmachine: Creating machine...
	I0830 20:25:55.032574  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .Create
	I0830 20:25:55.032724  241645 main.go:141] libmachine: (multinode-944570-m02) Creating KVM machine...
	I0830 20:25:55.033893  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found existing default KVM network
	I0830 20:25:55.033990  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found existing private KVM network mk-multinode-944570
	I0830 20:25:55.034092  241645 main.go:141] libmachine: (multinode-944570-m02) Setting up store path in /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02 ...
	I0830 20:25:55.034118  241645 main.go:141] libmachine: (multinode-944570-m02) Building disk image from file:///home/jenkins/minikube-integration/17145-222139/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0830 20:25:55.034181  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:25:55.034075  242034 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17145-222139/.minikube
	I0830 20:25:55.034270  241645 main.go:141] libmachine: (multinode-944570-m02) Downloading /home/jenkins/minikube-integration/17145-222139/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17145-222139/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0830 20:25:55.259494  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:25:55.259349  242034 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02/id_rsa...
	I0830 20:25:55.370819  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:25:55.370677  242034 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02/multinode-944570-m02.rawdisk...
	I0830 20:25:55.370851  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Writing magic tar header
	I0830 20:25:55.370864  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Writing SSH key tar header
	I0830 20:25:55.370942  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:25:55.370865  242034 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02 ...
	I0830 20:25:55.371056  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02
	I0830 20:25:55.371105  241645 main.go:141] libmachine: (multinode-944570-m02) Setting executable bit set on /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02 (perms=drwx------)
	I0830 20:25:55.371127  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17145-222139/.minikube/machines
	I0830 20:25:55.371155  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17145-222139/.minikube
	I0830 20:25:55.371174  241645 main.go:141] libmachine: (multinode-944570-m02) Setting executable bit set on /home/jenkins/minikube-integration/17145-222139/.minikube/machines (perms=drwxr-xr-x)
	I0830 20:25:55.371189  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17145-222139
	I0830 20:25:55.371204  241645 main.go:141] libmachine: (multinode-944570-m02) Setting executable bit set on /home/jenkins/minikube-integration/17145-222139/.minikube (perms=drwxr-xr-x)
	I0830 20:25:55.371219  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0830 20:25:55.371233  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Checking permissions on dir: /home/jenkins
	I0830 20:25:55.371247  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Checking permissions on dir: /home
	I0830 20:25:55.371261  241645 main.go:141] libmachine: (multinode-944570-m02) Setting executable bit set on /home/jenkins/minikube-integration/17145-222139 (perms=drwxrwxr-x)
	I0830 20:25:55.371279  241645 main.go:141] libmachine: (multinode-944570-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0830 20:25:55.371301  241645 main.go:141] libmachine: (multinode-944570-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0830 20:25:55.371312  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Skipping /home - not owner
	I0830 20:25:55.371331  241645 main.go:141] libmachine: (multinode-944570-m02) Creating domain...
	I0830 20:25:55.372512  241645 main.go:141] libmachine: (multinode-944570-m02) define libvirt domain using xml: 
	I0830 20:25:55.372526  241645 main.go:141] libmachine: (multinode-944570-m02) <domain type='kvm'>
	I0830 20:25:55.372539  241645 main.go:141] libmachine: (multinode-944570-m02)   <name>multinode-944570-m02</name>
	I0830 20:25:55.372563  241645 main.go:141] libmachine: (multinode-944570-m02)   <memory unit='MiB'>2200</memory>
	I0830 20:25:55.372577  241645 main.go:141] libmachine: (multinode-944570-m02)   <vcpu>2</vcpu>
	I0830 20:25:55.372588  241645 main.go:141] libmachine: (multinode-944570-m02)   <features>
	I0830 20:25:55.372597  241645 main.go:141] libmachine: (multinode-944570-m02)     <acpi/>
	I0830 20:25:55.372604  241645 main.go:141] libmachine: (multinode-944570-m02)     <apic/>
	I0830 20:25:55.372611  241645 main.go:141] libmachine: (multinode-944570-m02)     <pae/>
	I0830 20:25:55.372619  241645 main.go:141] libmachine: (multinode-944570-m02)     
	I0830 20:25:55.372642  241645 main.go:141] libmachine: (multinode-944570-m02)   </features>
	I0830 20:25:55.372655  241645 main.go:141] libmachine: (multinode-944570-m02)   <cpu mode='host-passthrough'>
	I0830 20:25:55.372670  241645 main.go:141] libmachine: (multinode-944570-m02)   
	I0830 20:25:55.372686  241645 main.go:141] libmachine: (multinode-944570-m02)   </cpu>
	I0830 20:25:55.372696  241645 main.go:141] libmachine: (multinode-944570-m02)   <os>
	I0830 20:25:55.372704  241645 main.go:141] libmachine: (multinode-944570-m02)     <type>hvm</type>
	I0830 20:25:55.372712  241645 main.go:141] libmachine: (multinode-944570-m02)     <boot dev='cdrom'/>
	I0830 20:25:55.372719  241645 main.go:141] libmachine: (multinode-944570-m02)     <boot dev='hd'/>
	I0830 20:25:55.372726  241645 main.go:141] libmachine: (multinode-944570-m02)     <bootmenu enable='no'/>
	I0830 20:25:55.372734  241645 main.go:141] libmachine: (multinode-944570-m02)   </os>
	I0830 20:25:55.372747  241645 main.go:141] libmachine: (multinode-944570-m02)   <devices>
	I0830 20:25:55.372760  241645 main.go:141] libmachine: (multinode-944570-m02)     <disk type='file' device='cdrom'>
	I0830 20:25:55.372796  241645 main.go:141] libmachine: (multinode-944570-m02)       <source file='/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02/boot2docker.iso'/>
	I0830 20:25:55.372823  241645 main.go:141] libmachine: (multinode-944570-m02)       <target dev='hdc' bus='scsi'/>
	I0830 20:25:55.372834  241645 main.go:141] libmachine: (multinode-944570-m02)       <readonly/>
	I0830 20:25:55.372845  241645 main.go:141] libmachine: (multinode-944570-m02)     </disk>
	I0830 20:25:55.372855  241645 main.go:141] libmachine: (multinode-944570-m02)     <disk type='file' device='disk'>
	I0830 20:25:55.372868  241645 main.go:141] libmachine: (multinode-944570-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0830 20:25:55.372889  241645 main.go:141] libmachine: (multinode-944570-m02)       <source file='/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02/multinode-944570-m02.rawdisk'/>
	I0830 20:25:55.372903  241645 main.go:141] libmachine: (multinode-944570-m02)       <target dev='hda' bus='virtio'/>
	I0830 20:25:55.372916  241645 main.go:141] libmachine: (multinode-944570-m02)     </disk>
	I0830 20:25:55.372928  241645 main.go:141] libmachine: (multinode-944570-m02)     <interface type='network'>
	I0830 20:25:55.372938  241645 main.go:141] libmachine: (multinode-944570-m02)       <source network='mk-multinode-944570'/>
	I0830 20:25:55.372945  241645 main.go:141] libmachine: (multinode-944570-m02)       <model type='virtio'/>
	I0830 20:25:55.372955  241645 main.go:141] libmachine: (multinode-944570-m02)     </interface>
	I0830 20:25:55.372969  241645 main.go:141] libmachine: (multinode-944570-m02)     <interface type='network'>
	I0830 20:25:55.372984  241645 main.go:141] libmachine: (multinode-944570-m02)       <source network='default'/>
	I0830 20:25:55.372996  241645 main.go:141] libmachine: (multinode-944570-m02)       <model type='virtio'/>
	I0830 20:25:55.373009  241645 main.go:141] libmachine: (multinode-944570-m02)     </interface>
	I0830 20:25:55.373021  241645 main.go:141] libmachine: (multinode-944570-m02)     <serial type='pty'>
	I0830 20:25:55.373031  241645 main.go:141] libmachine: (multinode-944570-m02)       <target port='0'/>
	I0830 20:25:55.373040  241645 main.go:141] libmachine: (multinode-944570-m02)     </serial>
	I0830 20:25:55.373054  241645 main.go:141] libmachine: (multinode-944570-m02)     <console type='pty'>
	I0830 20:25:55.373068  241645 main.go:141] libmachine: (multinode-944570-m02)       <target type='serial' port='0'/>
	I0830 20:25:55.373080  241645 main.go:141] libmachine: (multinode-944570-m02)     </console>
	I0830 20:25:55.373101  241645 main.go:141] libmachine: (multinode-944570-m02)     <rng model='virtio'>
	I0830 20:25:55.373118  241645 main.go:141] libmachine: (multinode-944570-m02)       <backend model='random'>/dev/random</backend>
	I0830 20:25:55.373131  241645 main.go:141] libmachine: (multinode-944570-m02)     </rng>
	I0830 20:25:55.373140  241645 main.go:141] libmachine: (multinode-944570-m02)     
	I0830 20:25:55.373154  241645 main.go:141] libmachine: (multinode-944570-m02)     
	I0830 20:25:55.373166  241645 main.go:141] libmachine: (multinode-944570-m02)   </devices>
	I0830 20:25:55.373179  241645 main.go:141] libmachine: (multinode-944570-m02) </domain>
	I0830 20:25:55.373194  241645 main.go:141] libmachine: (multinode-944570-m02) 
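
Once the domain XML above is assembled, the driver hands it to libvirt to define and boot the VM. A sketch of those two calls, assuming the libvirt.org/go/libvirt bindings (not necessarily the exact calls the kvm2 driver makes):

```go
package main

import (
	libvirt "libvirt.org/go/libvirt"
)

// defineAndStart registers the domain XML with libvirt and boots the VM,
// roughly the "define libvirt domain using xml" / "Creating domain..." steps.
func defineAndStart(domainXML string) (*libvirt.Domain, error) {
	conn, err := libvirt.NewConnect("qemu:///system") // URI from KVMQemuURI in the config above
	if err != nil {
		return nil, err
	}
	defer conn.Close()
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return nil, err
	}
	if err := dom.Create(); err != nil { // Create starts a defined domain
		dom.Free()
		return nil, err
	}
	return dom, nil
}
```
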
	I0830 20:25:55.380007  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:20:a2:d3 in network default
	I0830 20:25:55.380545  241645 main.go:141] libmachine: (multinode-944570-m02) Ensuring networks are active...
	I0830 20:25:55.380571  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:25:55.381242  241645 main.go:141] libmachine: (multinode-944570-m02) Ensuring network default is active
	I0830 20:25:55.381595  241645 main.go:141] libmachine: (multinode-944570-m02) Ensuring network mk-multinode-944570 is active
	I0830 20:25:55.381918  241645 main.go:141] libmachine: (multinode-944570-m02) Getting domain xml...
	I0830 20:25:55.382580  241645 main.go:141] libmachine: (multinode-944570-m02) Creating domain...
	I0830 20:25:56.608809  241645 main.go:141] libmachine: (multinode-944570-m02) Waiting to get IP...
	I0830 20:25:56.609657  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:25:56.610053  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
	I0830 20:25:56.610090  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:25:56.610039  242034 retry.go:31] will retry after 302.606474ms: waiting for machine to come up
	I0830 20:25:56.914633  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:25:56.915071  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
	I0830 20:25:56.915100  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:25:56.915033  242034 retry.go:31] will retry after 375.67518ms: waiting for machine to come up
	I0830 20:25:57.292648  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:25:57.293041  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
	I0830 20:25:57.293075  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:25:57.292974  242034 retry.go:31] will retry after 350.879029ms: waiting for machine to come up
	I0830 20:25:57.645554  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:25:57.646037  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
	I0830 20:25:57.646067  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:25:57.645994  242034 retry.go:31] will retry after 460.417887ms: waiting for machine to come up
	I0830 20:25:58.107592  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:25:58.108052  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
	I0830 20:25:58.108084  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:25:58.107995  242034 retry.go:31] will retry after 642.731127ms: waiting for machine to come up
	I0830 20:25:58.752095  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:25:58.752499  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
	I0830 20:25:58.752535  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:25:58.752438  242034 retry.go:31] will retry after 724.563571ms: waiting for machine to come up
	I0830 20:25:59.478464  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:25:59.478907  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
	I0830 20:25:59.478938  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:25:59.478851  242034 retry.go:31] will retry after 715.405729ms: waiting for machine to come up
	I0830 20:26:00.196342  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:00.196798  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
	I0830 20:26:00.196822  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:26:00.196772  242034 retry.go:31] will retry after 1.251649903s: waiting for machine to come up
	I0830 20:26:01.449666  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:01.450189  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
	I0830 20:26:01.450213  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:26:01.450164  242034 retry.go:31] will retry after 1.20189777s: waiting for machine to come up
	I0830 20:26:02.653445  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:02.653804  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
	I0830 20:26:02.653832  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:26:02.653758  242034 retry.go:31] will retry after 1.604660089s: waiting for machine to come up
	I0830 20:26:04.260497  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:04.260956  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
	I0830 20:26:04.260989  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:26:04.260891  242034 retry.go:31] will retry after 2.060538508s: waiting for machine to come up
	I0830 20:26:06.324713  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:06.325118  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
	I0830 20:26:06.325162  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:26:06.325055  242034 retry.go:31] will retry after 2.818222039s: waiting for machine to come up
	I0830 20:26:09.147034  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:09.147441  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
	I0830 20:26:09.147465  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:26:09.147406  242034 retry.go:31] will retry after 2.829546399s: waiting for machine to come up
	I0830 20:26:11.979378  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:11.979741  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
	I0830 20:26:11.979779  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:26:11.979698  242034 retry.go:31] will retry after 3.8123592s: waiting for machine to come up
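
The retry.go lines above poll for the VM's DHCP lease with randomized, roughly doubling delays (303ms, 376ms, … 3.8s) before the address finally appears just below. A sketch of that jittered-backoff pattern; the constants are illustrative, not minikube's:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn with jittered, roughly doubling delays,
// matching the "will retry after ..." cadence in the log above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}
```
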
	I0830 20:26:15.794149  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:15.794665  241645 main.go:141] libmachine: (multinode-944570-m02) Found IP for machine: 192.168.39.87
	I0830 20:26:15.794700  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has current primary IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:15.794712  241645 main.go:141] libmachine: (multinode-944570-m02) Reserving static IP address...
	I0830 20:26:15.795045  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find host DHCP lease matching {name: "multinode-944570-m02", mac: "52:54:00:c1:a1:9d", ip: "192.168.39.87"} in network mk-multinode-944570
	I0830 20:26:15.870100  241645 main.go:141] libmachine: (multinode-944570-m02) Reserved static IP address: 192.168.39.87
	I0830 20:26:15.870137  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Getting to WaitForSSH function...
	I0830 20:26:15.870148  241645 main.go:141] libmachine: (multinode-944570-m02) Waiting for SSH to be available...
	I0830 20:26:15.872535  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:15.872977  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c1:a1:9d}
	I0830 20:26:15.873014  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:15.873101  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Using SSH client type: external
	I0830 20:26:15.873131  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02/id_rsa (-rw-------)
	I0830 20:26:15.873205  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.87 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 20:26:15.873234  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | About to run SSH command:
	I0830 20:26:15.873257  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | exit 0
	I0830 20:26:15.966891  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | SSH cmd err, output: <nil>: 
	I0830 20:26:15.967220  241645 main.go:141] libmachine: (multinode-944570-m02) KVM machine creation complete!
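
The "SSH client type: external" probe above shells out to the system ssh binary with host-key checking disabled and runs `exit 0` until it succeeds, which is what marks the machine as reachable. A sketch of assembling that command (a subset of the options shown in the log):

```go
package main

import (
	"os/exec"
)

// probeSSH runs `exit 0` over the system ssh client, mirroring the
// external WaitForSSH probe in the log above.
func probeSSH(user, ip, keyPath string) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"-p", "22",
		user + "@" + ip,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}
```
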
	I0830 20:26:15.967573  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetConfigRaw
	I0830 20:26:15.968143  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .DriverName
	I0830 20:26:15.968350  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .DriverName
	I0830 20:26:15.968538  241645 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0830 20:26:15.968554  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetState
	I0830 20:26:15.969933  241645 main.go:141] libmachine: Detecting operating system of created instance...
	I0830 20:26:15.969950  241645 main.go:141] libmachine: Waiting for SSH to be available...
	I0830 20:26:15.969960  241645 main.go:141] libmachine: Getting to WaitForSSH function...
	I0830 20:26:15.969971  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
	I0830 20:26:15.972134  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:15.972480  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
	I0830 20:26:15.972514  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:15.972728  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
	I0830 20:26:15.972929  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
	I0830 20:26:15.973110  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
	I0830 20:26:15.973264  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
	I0830 20:26:15.973435  241645 main.go:141] libmachine: Using SSH client type: native
	I0830 20:26:15.974096  241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0830 20:26:15.974115  241645 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0830 20:26:16.098476  241645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 20:26:16.098501  241645 main.go:141] libmachine: Detecting the provisioner...
	I0830 20:26:16.098510  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
	I0830 20:26:16.101490  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:16.101868  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
	I0830 20:26:16.101898  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:16.102042  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
	I0830 20:26:16.102237  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
	I0830 20:26:16.102423  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
	I0830 20:26:16.102563  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
	I0830 20:26:16.102702  241645 main.go:141] libmachine: Using SSH client type: native
	I0830 20:26:16.103095  241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0830 20:26:16.103112  241645 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0830 20:26:16.223742  241645 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0830 20:26:16.223862  241645 main.go:141] libmachine: found compatible host: buildroot
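Provisioner detection works by running `cat /etc/os-release` on the guest and matching the ID field against known distributions ("buildroot" here). A self-contained sketch of that key=value parse, under the assumption it is a plain os-release scan:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease extracts KEY=value pairs from /etc/os-release output,
// stripping optional quotes, so the caller can match ID (e.g. "buildroot").
func parseOSRelease(out string) map[string]string {
	kv := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		if len(parts) != 2 {
			continue
		}
		kv[parts[0]] = strings.Trim(parts[1], `"`)
	}
	return kv
}

func main() {
	out := "NAME=Buildroot\nID=buildroot\nVERSION_ID=2021.02.12\n"
	fmt.Println(parseOSRelease(out)["ID"]) // buildroot
}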
	I0830 20:26:16.223880  241645 main.go:141] libmachine: Provisioning with buildroot...
	I0830 20:26:16.223894  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetMachineName
	I0830 20:26:16.224192  241645 buildroot.go:166] provisioning hostname "multinode-944570-m02"
	I0830 20:26:16.224223  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetMachineName
	I0830 20:26:16.224443  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
	I0830 20:26:16.227187  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:16.227551  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
	I0830 20:26:16.227600  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:16.227744  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
	I0830 20:26:16.227921  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
	I0830 20:26:16.228114  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
	I0830 20:26:16.228285  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
	I0830 20:26:16.228451  241645 main.go:141] libmachine: Using SSH client type: native
	I0830 20:26:16.228836  241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0830 20:26:16.228849  241645 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-944570-m02 && echo "multinode-944570-m02" | sudo tee /etc/hostname
	I0830 20:26:16.363283  241645 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-944570-m02
	
	I0830 20:26:16.363331  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
	I0830 20:26:16.366075  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:16.366444  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
	I0830 20:26:16.366480  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:16.366617  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
	I0830 20:26:16.366801  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
	I0830 20:26:16.367014  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
	I0830 20:26:16.367186  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
	I0830 20:26:16.367365  241645 main.go:141] libmachine: Using SSH client type: native
	I0830 20:26:16.367766  241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0830 20:26:16.367782  241645 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-944570-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-944570-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-944570-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 20:26:16.493984  241645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
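The /etc/hosts shell snippet above implements a three-way rule: leave the file alone if the hostname is already mapped, rewrite an existing 127.0.1.1 line if present, otherwise append one. The same logic in Go, as a sketch (setLoopbackHostname is a hypothetical name):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// setLoopbackHostname applies the same rule as the shell above: if some line
// already ends in the hostname, keep /etc/hosts unchanged; else rewrite an
// existing 127.0.1.1 entry or append a new one.
func setLoopbackHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(setLoopbackHostname("127.0.0.1 localhost\n", "multinode-944570-m02"))
}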
	I0830 20:26:16.494047  241645 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17145-222139/.minikube CaCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17145-222139/.minikube}
	I0830 20:26:16.494074  241645 buildroot.go:174] setting up certificates
	I0830 20:26:16.494088  241645 provision.go:83] configureAuth start
	I0830 20:26:16.494106  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetMachineName
	I0830 20:26:16.494400  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetIP
	I0830 20:26:16.497051  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:16.497396  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
	I0830 20:26:16.497431  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:16.497609  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
	I0830 20:26:16.499938  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:16.500246  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
	I0830 20:26:16.500278  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:16.500408  241645 provision.go:138] copyHostCerts
	I0830 20:26:16.500436  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem
	I0830 20:26:16.500464  241645 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem, removing ...
	I0830 20:26:16.500473  241645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem
	I0830 20:26:16.500564  241645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem (1082 bytes)
	I0830 20:26:16.500659  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem
	I0830 20:26:16.500682  241645 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem, removing ...
	I0830 20:26:16.500691  241645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem
	I0830 20:26:16.500737  241645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem (1123 bytes)
	I0830 20:26:16.500805  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem
	I0830 20:26:16.500825  241645 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem, removing ...
	I0830 20:26:16.500832  241645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem
	I0830 20:26:16.500865  241645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem (1675 bytes)
	I0830 20:26:16.500929  241645 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca-key.pem org=jenkins.multinode-944570-m02 san=[192.168.39.87 192.168.39.87 localhost 127.0.0.1 minikube multinode-944570-m02]
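The server-cert generation step builds a certificate whose SANs cover the node IP, loopback, and the hostnames listed in `san=[...]` above, signed by the machine CA. A compact crypto/x509 sketch of that shape (throwaway CA and illustrative values, not minikube's actual code paths):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for certs/ca.pem + ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"demo-ca"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	// Server cert: SANs match the san=[...] list logged above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-944570-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.87"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "multinode-944570-m02"},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("server cert: %d DER bytes\n", len(der))
}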
	I0830 20:26:16.565338  241645 provision.go:172] copyRemoteCerts
	I0830 20:26:16.565392  241645 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 20:26:16.565419  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
	I0830 20:26:16.568036  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:16.568397  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
	I0830 20:26:16.568433  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:16.568582  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
	I0830 20:26:16.568741  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
	I0830 20:26:16.568851  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
	I0830 20:26:16.569043  241645 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02/id_rsa Username:docker}
	I0830 20:26:16.665811  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0830 20:26:16.665872  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0830 20:26:16.688096  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0830 20:26:16.688154  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0830 20:26:16.709910  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0830 20:26:16.709964  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 20:26:16.732379  241645 provision.go:86] duration metric: configureAuth took 238.276272ms
	I0830 20:26:16.732406  241645 buildroot.go:189] setting minikube options for container-runtime
	I0830 20:26:16.732589  241645 config.go:182] Loaded profile config "multinode-944570": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0830 20:26:16.732614  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .DriverName
	I0830 20:26:16.732881  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
	I0830 20:26:16.735477  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:16.735763  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
	I0830 20:26:16.735793  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:16.736029  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
	I0830 20:26:16.736219  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
	I0830 20:26:16.736412  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
	I0830 20:26:16.736567  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
	I0830 20:26:16.736737  241645 main.go:141] libmachine: Using SSH client type: native
	I0830 20:26:16.737237  241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0830 20:26:16.737252  241645 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0830 20:26:16.861239  241645 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0830 20:26:16.861264  241645 buildroot.go:70] root file system type: tmpfs
	I0830 20:26:16.861378  241645 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0830 20:26:16.861395  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
	I0830 20:26:16.863937  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:16.864240  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
	I0830 20:26:16.864266  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:16.864478  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
	I0830 20:26:16.864666  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
	I0830 20:26:16.864846  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
	I0830 20:26:16.864978  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
	I0830 20:26:16.865142  241645 main.go:141] libmachine: Using SSH client type: native
	I0830 20:26:16.865531  241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0830 20:26:16.865587  241645 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.254"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0830 20:26:17.005103  241645 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.254
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0830 20:26:17.005147  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
	I0830 20:26:17.007937  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:17.008381  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
	I0830 20:26:17.008415  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:17.008571  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
	I0830 20:26:17.008765  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
	I0830 20:26:17.008949  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
	I0830 20:26:17.009134  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
	I0830 20:26:17.009332  241645 main.go:141] libmachine: Using SSH client type: native
	I0830 20:26:17.009924  241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0830 20:26:17.009946  241645 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0830 20:26:17.764718  241645 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
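The one-liner above is an install-if-changed idiom: `diff -u` succeeds when the rendered unit matches what is on disk, and only on a mismatch (or, as here, a missing file) is the .new file moved into place and the service reloaded, enabled, and restarted. A Go sketch of the same flow (installIfChanged is a hypothetical name; the real step runs remotely over SSH with sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged swaps in the rendered unit and restarts docker only when
// it differs from the current one, mirroring the diff||{mv;...} one-liner.
func installIfChanged(current, rendered string) error {
	if exec.Command("diff", "-u", current, rendered).Run() == nil {
		return nil // identical: nothing to do
	}
	if err := os.Rename(rendered, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", "docker"},
		{"restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(installIfChanged(
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"))
}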
	
	I0830 20:26:17.764756  241645 main.go:141] libmachine: Checking connection to Docker...
	I0830 20:26:17.764771  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetURL
	I0830 20:26:17.766130  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Using libvirt version 6000000
	I0830 20:26:17.768012  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:17.768389  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
	I0830 20:26:17.768437  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:17.768601  241645 main.go:141] libmachine: Docker is up and running!
	I0830 20:26:17.768619  241645 main.go:141] libmachine: Reticulating splines...
	I0830 20:26:17.768626  241645 client.go:171] LocalClient.Create took 22.736910165s
	I0830 20:26:17.768659  241645 start.go:167] duration metric: libmachine.API.Create for "multinode-944570" took 22.737003742s
	I0830 20:26:17.768671  241645 start.go:300] post-start starting for "multinode-944570-m02" (driver="kvm2")
	I0830 20:26:17.768683  241645 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 20:26:17.768704  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .DriverName
	I0830 20:26:17.768965  241645 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 20:26:17.769001  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
	I0830 20:26:17.771493  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:17.771869  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
	I0830 20:26:17.771893  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:17.772060  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
	I0830 20:26:17.772277  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
	I0830 20:26:17.772460  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
	I0830 20:26:17.772611  241645 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02/id_rsa Username:docker}
	I0830 20:26:17.864068  241645 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 20:26:17.867815  241645 command_runner.go:130] > NAME=Buildroot
	I0830 20:26:17.867843  241645 command_runner.go:130] > VERSION=2021.02.12-1-g88b5c50-dirty
	I0830 20:26:17.867849  241645 command_runner.go:130] > ID=buildroot
	I0830 20:26:17.867858  241645 command_runner.go:130] > VERSION_ID=2021.02.12
	I0830 20:26:17.867866  241645 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0830 20:26:17.867927  241645 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 20:26:17.867941  241645 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-222139/.minikube/addons for local assets ...
	I0830 20:26:17.868012  241645 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-222139/.minikube/files for local assets ...
	I0830 20:26:17.868090  241645 filesync.go:149] local asset: /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem -> 2293472.pem in /etc/ssl/certs
	I0830 20:26:17.868121  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem -> /etc/ssl/certs/2293472.pem
	I0830 20:26:17.868234  241645 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 20:26:17.876101  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem --> /etc/ssl/certs/2293472.pem (1708 bytes)
	I0830 20:26:17.899545  241645 start.go:303] post-start completed in 130.860082ms
	I0830 20:26:17.899598  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetConfigRaw
	I0830 20:26:17.900218  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetIP
	I0830 20:26:17.902905  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:17.903241  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
	I0830 20:26:17.903271  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:17.903547  241645 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/config.json ...
	I0830 20:26:17.903719  241645 start.go:128] duration metric: createHost completed in 22.89035769s
	I0830 20:26:17.903746  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
	I0830 20:26:17.905713  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:17.906013  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
	I0830 20:26:17.906039  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:17.906169  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
	I0830 20:26:17.906363  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
	I0830 20:26:17.906528  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
	I0830 20:26:17.906650  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
	I0830 20:26:17.906822  241645 main.go:141] libmachine: Using SSH client type: native
	I0830 20:26:17.907256  241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0830 20:26:17.907270  241645 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0830 20:26:18.035722  241645 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693427178.008129069
	
	I0830 20:26:18.035749  241645 fix.go:206] guest clock: 1693427178.008129069
	I0830 20:26:18.035757  241645 fix.go:219] Guest: 2023-08-30 20:26:18.008129069 +0000 UTC Remote: 2023-08-30 20:26:17.903735593 +0000 UTC m=+99.699112165 (delta=104.393476ms)
	I0830 20:26:18.035771  241645 fix.go:190] guest clock delta is within tolerance: 104.393476ms
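The clock check parses the guest's `date +%s.%N` output, compares it against the local reference time, and accepts the machine when the delta falls inside a tolerance (the ~104ms above passes). A sketch of that comparison; the 2s threshold is an assumed value for illustration:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses `date +%s.%N` output and returns how far the guest
// clock is from the given local reference time.
func clockDelta(guestOut string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(local), nil
}

func main() {
	d, _ := clockDelta("1693427178.008129069", time.Unix(1693427177, 903735593))
	const tolerance = 2 * time.Second // assumed threshold for illustration
	fmt.Printf("delta=%v within=%v\n", d, d > -tolerance && d < tolerance)
}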
	I0830 20:26:18.035776  241645 start.go:83] releasing machines lock for "multinode-944570-m02", held for 23.02250006s
	I0830 20:26:18.035794  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .DriverName
	I0830 20:26:18.036095  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetIP
	I0830 20:26:18.038762  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:18.039123  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
	I0830 20:26:18.039159  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:18.041547  241645 out.go:177] * Found network options:
	I0830 20:26:18.043026  241645 out.go:177]   - NO_PROXY=192.168.39.254
	W0830 20:26:18.044413  241645 proxy.go:119] fail to check proxy env: Error ip not in block
	I0830 20:26:18.044459  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .DriverName
	I0830 20:26:18.045019  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .DriverName
	I0830 20:26:18.045186  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .DriverName
	I0830 20:26:18.045276  241645 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 20:26:18.045314  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
	W0830 20:26:18.045393  241645 proxy.go:119] fail to check proxy env: Error ip not in block
	I0830 20:26:18.045464  241645 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0830 20:26:18.045479  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
	I0830 20:26:18.048117  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:18.048173  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:18.048497  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
	I0830 20:26:18.048518  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:18.048543  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
	I0830 20:26:18.048557  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:18.048717  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
	I0830 20:26:18.048852  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
	I0830 20:26:18.048923  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
	I0830 20:26:18.049033  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
	I0830 20:26:18.049133  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
	I0830 20:26:18.049195  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
	I0830 20:26:18.049297  241645 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02/id_rsa Username:docker}
	I0830 20:26:18.049441  241645 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02/id_rsa Username:docker}
	I0830 20:26:18.167831  241645 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0830 20:26:18.168666  241645 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0830 20:26:18.168704  241645 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 20:26:18.168758  241645 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 20:26:18.182323  241645 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0830 20:26:18.182557  241645 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
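The find/mv step renames any bridge or podman CNI config so the container runtime ignores it (kindnet will be installed instead). The equivalent logic in Go, as a sketch (disableBridgeCNI is a hypothetical name):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames every conflist in dir whose name mentions
// "bridge" or "podman" with a .mk_disabled suffix, like the find/mv above.
func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	out, err := disableBridgeCNI("/etc/cni/net.d")
	fmt.Println(out, err)
}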
	I0830 20:26:18.182578  241645 start.go:466] detecting cgroup driver to use...
	I0830 20:26:18.182699  241645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 20:26:18.198089  241645 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0830 20:26:18.198494  241645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0830 20:26:18.207229  241645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0830 20:26:18.216468  241645 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0830 20:26:18.216541  241645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0830 20:26:18.225253  241645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0830 20:26:18.233992  241645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0830 20:26:18.242668  241645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0830 20:26:18.251139  241645 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 20:26:18.260192  241645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0830 20:26:18.268775  241645 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 20:26:18.276482  241645 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0830 20:26:18.276560  241645 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 20:26:18.284162  241645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 20:26:18.382328  241645 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0830 20:26:18.398015  241645 start.go:466] detecting cgroup driver to use...
	I0830 20:26:18.398117  241645 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0830 20:26:18.411951  241645 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0830 20:26:18.411969  241645 command_runner.go:130] > [Unit]
	I0830 20:26:18.411975  241645 command_runner.go:130] > Description=Docker Application Container Engine
	I0830 20:26:18.411981  241645 command_runner.go:130] > Documentation=https://docs.docker.com
	I0830 20:26:18.411986  241645 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0830 20:26:18.411991  241645 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0830 20:26:18.411996  241645 command_runner.go:130] > StartLimitBurst=3
	I0830 20:26:18.412000  241645 command_runner.go:130] > StartLimitIntervalSec=60
	I0830 20:26:18.412005  241645 command_runner.go:130] > [Service]
	I0830 20:26:18.412008  241645 command_runner.go:130] > Type=notify
	I0830 20:26:18.412013  241645 command_runner.go:130] > Restart=on-failure
	I0830 20:26:18.412025  241645 command_runner.go:130] > Environment=NO_PROXY=192.168.39.254
	I0830 20:26:18.412035  241645 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0830 20:26:18.412049  241645 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0830 20:26:18.412064  241645 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0830 20:26:18.412075  241645 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0830 20:26:18.412083  241645 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0830 20:26:18.412090  241645 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0830 20:26:18.412098  241645 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0830 20:26:18.412107  241645 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0830 20:26:18.412117  241645 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0830 20:26:18.412121  241645 command_runner.go:130] > ExecStart=
	I0830 20:26:18.412135  241645 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0830 20:26:18.412145  241645 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0830 20:26:18.412156  241645 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0830 20:26:18.412172  241645 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0830 20:26:18.412182  241645 command_runner.go:130] > LimitNOFILE=infinity
	I0830 20:26:18.412188  241645 command_runner.go:130] > LimitNPROC=infinity
	I0830 20:26:18.412200  241645 command_runner.go:130] > LimitCORE=infinity
	I0830 20:26:18.412210  241645 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0830 20:26:18.412221  241645 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0830 20:26:18.412227  241645 command_runner.go:130] > TasksMax=infinity
	I0830 20:26:18.412232  241645 command_runner.go:130] > TimeoutStartSec=0
	I0830 20:26:18.412241  241645 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0830 20:26:18.412245  241645 command_runner.go:130] > Delegate=yes
	I0830 20:26:18.412253  241645 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0830 20:26:18.412261  241645 command_runner.go:130] > KillMode=process
	I0830 20:26:18.412267  241645 command_runner.go:130] > [Install]
	I0830 20:26:18.412271  241645 command_runner.go:130] > WantedBy=multi-user.target
	I0830 20:26:18.412327  241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 20:26:18.424774  241645 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 20:26:18.444173  241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 20:26:18.457853  241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0830 20:26:18.469785  241645 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0830 20:26:18.503424  241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0830 20:26:18.516289  241645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 20:26:18.534273  241645 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0830 20:26:18.534348  241645 ssh_runner.go:195] Run: which cri-dockerd
	I0830 20:26:18.537674  241645 command_runner.go:130] > /usr/bin/cri-dockerd
	I0830 20:26:18.537957  241645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0830 20:26:18.547927  241645 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0830 20:26:18.563554  241645 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0830 20:26:18.668812  241645 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0830 20:26:18.771145  241645 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0830 20:26:18.771175  241645 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
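The 144-byte /etc/docker/daemon.json copied here is what forces dockerd onto the cgroupfs driver. A guess at its shape: the load-bearing field is exec-opts with native.cgroupdriver=cgroupfs; the other keys below are illustrative, not read from the log:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative daemon.json payload; only exec-opts is implied by the log.
	cfg := map[string]any{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
		"log-driver":     "json-file",
		"log-opts":       map[string]string{"max-size": "100m"},
		"storage-driver": "overlay2",
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}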
	I0830 20:26:18.786831  241645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 20:26:18.896129  241645 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0830 20:26:20.250484  241645 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.354319055s)
	I0830 20:26:20.250549  241645 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0830 20:26:20.354304  241645 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0830 20:26:20.458215  241645 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0830 20:26:20.558505  241645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 20:26:20.656554  241645 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0830 20:26:20.670952  241645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 20:26:20.775121  241645 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0830 20:26:20.851467  241645 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0830 20:26:20.851558  241645 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0830 20:26:20.856769  241645 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0830 20:26:20.856792  241645 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0830 20:26:20.856798  241645 command_runner.go:130] > Device: 16h/22d	Inode: 947         Links: 1
	I0830 20:26:20.856805  241645 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0830 20:26:20.856815  241645 command_runner.go:130] > Access: 2023-08-30 20:26:20.762615498 +0000
	I0830 20:26:20.856820  241645 command_runner.go:130] > Modify: 2023-08-30 20:26:20.762615498 +0000
	I0830 20:26:20.856824  241645 command_runner.go:130] > Change: 2023-08-30 20:26:20.765619781 +0000
	I0830 20:26:20.856828  241645 command_runner.go:130] >  Birth: -
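The "Will wait 60s for socket path" step polls with `stat` until /var/run/cri-dockerd.sock exists as a socket (it appears immediately here). A local sketch of that bounded wait (waitForSocket is a hypothetical name; the real check runs stat over SSH):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, bounding the
// wait like the 60s budget logged above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %v", path, timeout)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}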
	I0830 20:26:20.856846  241645 start.go:534] Will wait 60s for crictl version
	I0830 20:26:20.856897  241645 ssh_runner.go:195] Run: which crictl
	I0830 20:26:20.861731  241645 command_runner.go:130] > /usr/bin/crictl
	I0830 20:26:20.862160  241645 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 20:26:20.898717  241645 command_runner.go:130] > Version:  0.1.0
	I0830 20:26:20.898750  241645 command_runner.go:130] > RuntimeName:  docker
	I0830 20:26:20.898758  241645 command_runner.go:130] > RuntimeVersion:  24.0.5
	I0830 20:26:20.898767  241645 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0830 20:26:20.898792  241645 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.5
	RuntimeApiVersion:  v1alpha2
	I0830 20:26:20.898853  241645 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0830 20:26:20.926751  241645 command_runner.go:130] > 24.0.5
	I0830 20:26:20.927803  241645 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0830 20:26:20.953389  241645 command_runner.go:130] > 24.0.5
	I0830 20:26:20.957055  241645 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.5 ...
	I0830 20:26:20.958479  241645 out.go:177]   - env NO_PROXY=192.168.39.254
	I0830 20:26:20.959809  241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetIP
	I0830 20:26:20.962454  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:20.962850  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
	I0830 20:26:20.962894  241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:26:20.963067  241645 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0830 20:26:20.966820  241645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 20:26:20.978285  241645 certs.go:56] Setting up /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570 for IP: 192.168.39.87
	I0830 20:26:20.978318  241645 certs.go:190] acquiring lock for shared ca certs: {Name:mk1ac5fe312bfdaa0e7afaffac50c875afeaeaed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 20:26:20.978453  241645 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.key
	I0830 20:26:20.978494  241645 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17145-222139/.minikube/proxy-client-ca.key
	I0830 20:26:20.978507  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0830 20:26:20.978528  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0830 20:26:20.978544  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0830 20:26:20.978558  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0830 20:26:20.978625  241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/229347.pem (1338 bytes)
	W0830 20:26:20.978663  241645 certs.go:433] ignoring /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/229347_empty.pem, impossibly tiny 0 bytes
	I0830 20:26:20.978679  241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 20:26:20.978716  241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem (1082 bytes)
	I0830 20:26:20.978746  241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem (1123 bytes)
	I0830 20:26:20.978779  241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem (1675 bytes)
	I0830 20:26:20.978830  241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem (1708 bytes)
	I0830 20:26:20.978866  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0830 20:26:20.978885  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/229347.pem -> /usr/share/ca-certificates/229347.pem
	I0830 20:26:20.978901  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem -> /usr/share/ca-certificates/2293472.pem
	I0830 20:26:20.979383  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 20:26:21.000348  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0830 20:26:21.020959  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 20:26:21.041729  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0830 20:26:21.063373  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 20:26:21.085100  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/certs/229347.pem --> /usr/share/ca-certificates/229347.pem (1338 bytes)
	I0830 20:26:21.106314  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem --> /usr/share/ca-certificates/2293472.pem (1708 bytes)
	I0830 20:26:21.126996  241645 ssh_runner.go:195] Run: openssl version
	I0830 20:26:21.131711  241645 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0830 20:26:21.132008  241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2293472.pem && ln -fs /usr/share/ca-certificates/2293472.pem /etc/ssl/certs/2293472.pem"
	I0830 20:26:21.140915  241645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2293472.pem
	I0830 20:26:21.144851  241645 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 30 20:12 /usr/share/ca-certificates/2293472.pem
	I0830 20:26:21.145021  241645 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 20:12 /usr/share/ca-certificates/2293472.pem
	I0830 20:26:21.145070  241645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2293472.pem
	I0830 20:26:21.149881  241645 command_runner.go:130] > 3ec20f2e
	I0830 20:26:21.149986  241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2293472.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 20:26:21.158319  241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 20:26:21.166602  241645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 20:26:21.170509  241645 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 30 20:06 /usr/share/ca-certificates/minikubeCA.pem
	I0830 20:26:21.170535  241645 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 20:06 /usr/share/ca-certificates/minikubeCA.pem
	I0830 20:26:21.170571  241645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 20:26:21.175250  241645 command_runner.go:130] > b5213941
	I0830 20:26:21.175466  241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 20:26:21.184136  241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229347.pem && ln -fs /usr/share/ca-certificates/229347.pem /etc/ssl/certs/229347.pem"
	I0830 20:26:21.192400  241645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229347.pem
	I0830 20:26:21.196494  241645 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 30 20:12 /usr/share/ca-certificates/229347.pem
	I0830 20:26:21.196518  241645 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 20:12 /usr/share/ca-certificates/229347.pem
	I0830 20:26:21.196567  241645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229347.pem
	I0830 20:26:21.201292  241645 command_runner.go:130] > 51391683
	I0830 20:26:21.201569  241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/229347.pem /etc/ssl/certs/51391683.0"
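
The sequence above is the standard OpenSSL trust-store installation: each PEM is copied into /usr/share/ca-certificates, its subject hash is computed with "openssl x509 -hash -noout", and /etc/ssl/certs/<hash>.0 is symlinked to it so OpenSSL's hash-based directory lookup can find the CA at verification time (the ".0" suffix is a collision index; ".1", ".2", and so on would follow if several certificates shared a hash). A minimal Go sketch of the same idea (hypothetical helper; assumes openssl on PATH and write access to /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA mirrors the log above: hash the certificate with openssl,
// then create the /etc/ssl/certs/<hash>.0 symlink OpenSSL looks up.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // "ln -fs": drop a stale link before recreating it
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
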
	I0830 20:26:21.209807  241645 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 20:26:21.213600  241645 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 20:26:21.213633  241645 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 20:26:21.213698  241645 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0830 20:26:21.236481  241645 command_runner.go:130] > cgroupfs
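
kubelet refuses to run when its cgroupDriver disagrees with the container runtime's, so before rendering any config minikube asks Docker which driver it uses; the "cgroupfs" answer above is threaded into the KubeletConfiguration below. A sketch of the probe (assumes a local docker CLI; the real call runs inside the VM over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe as the log: ask Docker for its cgroup driver so the
	// generated KubeletConfiguration can be made to match it.
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.TrimSpace(string(out))) // "cgroupfs" or "systemd"
}
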
	I0830 20:26:21.237455  241645 cni.go:84] Creating CNI manager for ""
	I0830 20:26:21.237472  241645 cni.go:136] 2 nodes found, recommending kindnet
	I0830 20:26:21.237485  241645 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 20:26:21.237507  241645 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.87 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-944570 NodeName:multinode-944570-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.254"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.87 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 20:26:21.237680  241645 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.87
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-944570-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.87
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.254"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
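
The generated config above is one stream of four YAML documents separated by "---": an InitConfiguration and a ClusterConfiguration for kubeadm, a KubeletConfiguration, and a KubeProxyConfiguration. Anything consuming such a stream has to decode it document by document; a sketch using gopkg.in/yaml.v3 (library choice is an assumption, and "kubeadm.yaml" is a hypothetical path; minikube itself renders this text from templates rather than parsing it back):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		// Each Decode call consumes one "---"-separated document.
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
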
	I0830 20:26:21.237759  241645 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-944570-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.87
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-944570 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 20:26:21.237819  241645 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 20:26:21.245688  241645 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.1': No such file or directory
	I0830 20:26:21.245725  241645 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.1': No such file or directory
	
	Initiating transfer...
	I0830 20:26:21.246145  241645 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.1
	I0830 20:26:21.255257  241645 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl.sha256
	I0830 20:26:21.255283  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/cache/linux/amd64/v1.28.1/kubectl -> /var/lib/minikube/binaries/v1.28.1/kubectl
	I0830 20:26:21.255368  241645 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubectl
	I0830 20:26:21.255385  241645 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17145-222139/.minikube/cache/linux/amd64/v1.28.1/kubelet
	I0830 20:26:21.255411  241645 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17145-222139/.minikube/cache/linux/amd64/v1.28.1/kubeadm
	I0830 20:26:21.260026  241645 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubectl': No such file or directory
	I0830 20:26:21.260237  241645 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubectl': No such file or directory
	I0830 20:26:21.260263  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/cache/linux/amd64/v1.28.1/kubectl --> /var/lib/minikube/binaries/v1.28.1/kubectl (49864704 bytes)
	I0830 20:26:23.412564  241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 20:26:23.425559  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/cache/linux/amd64/v1.28.1/kubelet -> /var/lib/minikube/binaries/v1.28.1/kubelet
	I0830 20:26:23.425645  241645 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubelet
	I0830 20:26:23.429518  241645 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubelet': No such file or directory
	I0830 20:26:23.429589  241645 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubelet': No such file or directory
	I0830 20:26:23.429620  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/cache/linux/amd64/v1.28.1/kubelet --> /var/lib/minikube/binaries/v1.28.1/kubelet (110764032 bytes)
	I0830 20:26:46.030696  241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/cache/linux/amd64/v1.28.1/kubeadm -> /var/lib/minikube/binaries/v1.28.1/kubeadm
	I0830 20:26:46.030775  241645 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubeadm
	I0830 20:26:46.035417  241645 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubeadm': No such file or directory
	I0830 20:26:46.035663  241645 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubeadm': No such file or directory
	I0830 20:26:46.035707  241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/cache/linux/amd64/v1.28.1/kubeadm --> /var/lib/minikube/binaries/v1.28.1/kubeadm (50749440 bytes)
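
All three binaries (kubectl, kubelet, kubeadm) go through the same fallback above: stat the target path first, and only on "No such file or directory" copy the cached binary across. A local, simplified sketch of check-then-copy (paths are placeholders; the real transfer is an scp into the VM):

package main

import (
	"fmt"
	"io"
	"os"
)

// ensureBinary copies src to dst only when dst is absent, mirroring
// the stat-then-scp pattern in the log.
func ensureBinary(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present: skip the transfer
	} else if !os.IsNotExist(err) {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := ensureBinary("cache/kubelet", "/var/lib/minikube/binaries/v1.28.1/kubelet"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
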
	I0830 20:26:46.260730  241645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0830 20:26:46.268460  241645 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0830 20:26:46.282326  241645 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 20:26:46.296262  241645 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0830 20:26:46.299948  241645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
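
The one-liner above rewrites /etc/hosts without editing it in place: grep -v drops any stale control-plane.minikube.internal line, the fresh mapping is appended, and the result lands in a temp file that sudo cp copies back so the original file's owner and mode survive. A rough Go equivalent (illustrative only; it renames rather than copies, which replaces the inode instead of preserving it):

package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry: same effect as grep -v
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".tmp" // stand-in for the /tmp/h.$$ scratch file
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := upsertHost("hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
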
	I0830 20:26:46.310073  241645 host.go:66] Checking if "multinode-944570" exists ...
	I0830 20:26:46.310367  241645 config.go:182] Loaded profile config "multinode-944570": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0830 20:26:46.310509  241645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:26:46.310555  241645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:26:46.325256  241645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35065
	I0830 20:26:46.325684  241645 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:26:46.326182  241645 main.go:141] libmachine: Using API Version  1
	I0830 20:26:46.326204  241645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:26:46.326525  241645 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:26:46.326698  241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
	I0830 20:26:46.326854  241645 start.go:301] JoinCluster: &{Name:multinode-944570 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-944570 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.254 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 20:26:46.326982  241645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0830 20:26:46.326998  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
	I0830 20:26:46.329653  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:26:46.330088  241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:26:46.330111  241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:26:46.330267  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
	I0830 20:26:46.330439  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:26:46.330612  241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
	I0830 20:26:46.330753  241645 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa Username:docker}
	I0830 20:26:46.532641  241645 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token e0xxtf.j38sb4ogstadzdh0 --discovery-token-ca-cert-hash sha256:c7d83bf61acd2074da49416c1394017fb833fac06001902ce7698890024b9ad6 
	I0830 20:26:46.535817  241645 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.87 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0830 20:26:46.535930  241645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e0xxtf.j38sb4ogstadzdh0 --discovery-token-ca-cert-hash sha256:c7d83bf61acd2074da49416c1394017fb833fac06001902ce7698890024b9ad6 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-944570-m02"
	I0830 20:26:46.581983  241645 command_runner.go:130] ! W0830 20:26:46.573788    1161 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0830 20:26:46.718592  241645 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 20:26:49.398181  241645 command_runner.go:130] > [preflight] Running pre-flight checks
	I0830 20:26:49.398208  241645 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0830 20:26:49.398222  241645 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0830 20:26:49.398232  241645 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 20:26:49.398243  241645 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 20:26:49.398250  241645 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0830 20:26:49.398260  241645 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0830 20:26:49.398268  241645 command_runner.go:130] > This node has joined the cluster:
	I0830 20:26:49.398279  241645 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0830 20:26:49.398298  241645 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0830 20:26:49.398310  241645 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0830 20:26:49.398336  241645 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e0xxtf.j38sb4ogstadzdh0 --discovery-token-ca-cert-hash sha256:c7d83bf61acd2074da49416c1394017fb833fac06001902ce7698890024b9ad6 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-944570-m02": (2.862384316s)
	I0830 20:26:49.398362  241645 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0830 20:26:49.650865  241645 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0830 20:26:49.650918  241645 start.go:303] JoinCluster complete in 3.324064662s
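
The join itself is two remote commands: "kubeadm token create --print-join-command --ttl=0" on the control plane emits a complete join line (token plus discovery CA hash), and the worker then runs that line with --ignore-preflight-errors=all, --cri-socket, and --node-name appended, exactly as logged above. A sketch of the orchestration with os/exec (assumes kubeadm is available locally; minikube runs both halves over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: mint a join command on the control plane.
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	join := strings.Fields(strings.TrimSpace(string(out)))
	if len(join) < 2 || join[0] != "kubeadm" {
		panic("unexpected --print-join-command output")
	}

	// Step 2: run it on the worker with the same extra flags as the log.
	args := append(join[1:],
		"--ignore-preflight-errors=all",
		"--cri-socket", "/var/run/cri-dockerd.sock",
		"--node-name=multinode-944570-m02",
	)
	b, err := exec.Command(join[0], args...).CombinedOutput()
	fmt.Print(string(b))
	if err != nil {
		panic(err)
	}
}
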
	I0830 20:26:49.650935  241645 cni.go:84] Creating CNI manager for ""
	I0830 20:26:49.650942  241645 cni.go:136] 2 nodes found, recommending kindnet
	I0830 20:26:49.651007  241645 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0830 20:26:49.655973  241645 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0830 20:26:49.655992  241645 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0830 20:26:49.655999  241645 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0830 20:26:49.656005  241645 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0830 20:26:49.656010  241645 command_runner.go:130] > Access: 2023-08-30 20:24:50.661585107 +0000
	I0830 20:26:49.656015  241645 command_runner.go:130] > Modify: 2023-08-24 15:47:28.000000000 +0000
	I0830 20:26:49.656019  241645 command_runner.go:130] > Change: 2023-08-30 20:24:48.918585107 +0000
	I0830 20:26:49.656023  241645 command_runner.go:130] >  Birth: -
	I0830 20:26:49.656323  241645 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0830 20:26:49.656346  241645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0830 20:26:49.672810  241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0830 20:26:49.998917  241645 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0830 20:26:50.000674  241645 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0830 20:26:50.002927  241645 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0830 20:26:50.015310  241645 command_runner.go:130] > daemonset.apps/kindnet configured
	I0830 20:26:50.021109  241645 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17145-222139/kubeconfig
	I0830 20:26:50.021511  241645 kapi.go:59] client config for multinode-944570: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.crt", KeyFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.key", CAFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 20:26:50.022036  241645 round_trippers.go:463] GET https://192.168.39.254:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0830 20:26:50.022050  241645 round_trippers.go:469] Request Headers:
	I0830 20:26:50.022061  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:26:50.022071  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:26:50.024458  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:26:50.024481  241645 round_trippers.go:577] Response Headers:
	I0830 20:26:50.024490  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:26:50.024499  241645 round_trippers.go:580]     Content-Length: 291
	I0830 20:26:50.024515  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:26:50 GMT
	I0830 20:26:50.024524  241645 round_trippers.go:580]     Audit-Id: f753ef77-215a-4bc7-8333-2baa348d3313
	I0830 20:26:50.024533  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:26:50.024543  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:26:50.024555  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:26:50.024583  241645 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c7c6dc1f-7aa3-4c43-b4bf-a6ffaa653be3","resourceVersion":"457","creationTimestamp":"2023-08-30T20:25:25Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0830 20:26:50.024675  241645 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-944570" context rescaled to 1 replicas
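
The rescale above never touches the Deployment spec directly: both the GET and any subsequent write go through the autoscaling/v1 Scale subresource (hence the /scale suffix on the request path), which is how minikube pins coredns to a single replica in multinode clusters. A client-go sketch of the same call pair (the kubeconfig path is an assumption):

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// GET .../deployments/coredns/scale, as in the request above.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		// Write back through the same subresource.
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("coredns pinned to 1 replica")
}
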
	I0830 20:26:50.024702  241645 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.87 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0830 20:26:50.027480  241645 out.go:177] * Verifying Kubernetes components...
	I0830 20:26:50.029053  241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 20:26:50.042408  241645 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17145-222139/kubeconfig
	I0830 20:26:50.042646  241645 kapi.go:59] client config for multinode-944570: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.crt", KeyFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.key", CAFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 20:26:50.042896  241645 node_ready.go:35] waiting up to 6m0s for node "multinode-944570-m02" to be "Ready" ...
	I0830 20:26:50.042975  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:26:50.042991  241645 round_trippers.go:469] Request Headers:
	I0830 20:26:50.043002  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:26:50.043010  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:26:50.047566  241645 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 20:26:50.047588  241645 round_trippers.go:577] Response Headers:
	I0830 20:26:50.047598  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:26:50.047607  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:26:50.047615  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:26:50.047624  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:26:50.047637  241645 round_trippers.go:580]     Content-Length: 3484
	I0830 20:26:50.047647  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:26:50 GMT
	I0830 20:26:50.047658  241645 round_trippers.go:580]     Audit-Id: f545e445-776e-4594-b0cb-1b5d714a44a6
	I0830 20:26:50.047825  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"528","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2460 chars]
	I0830 20:26:50.048142  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:26:50.048157  241645 round_trippers.go:469] Request Headers:
	I0830 20:26:50.048166  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:26:50.048176  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:26:50.050951  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:26:50.050971  241645 round_trippers.go:577] Response Headers:
	I0830 20:26:50.050980  241645 round_trippers.go:580]     Audit-Id: ed642158-7514-4759-88be-d27c59dcf46d
	I0830 20:26:50.050989  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:26:50.050997  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:26:50.051008  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:26:50.051023  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:26:50.051032  241645 round_trippers.go:580]     Content-Length: 3484
	I0830 20:26:50.051044  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:26:50 GMT
	I0830 20:26:50.051141  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"528","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2460 chars]
	I0830 20:26:50.552170  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:26:50.552193  241645 round_trippers.go:469] Request Headers:
	I0830 20:26:50.552202  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:26:50.552208  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:26:50.556706  241645 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 20:26:50.556723  241645 round_trippers.go:577] Response Headers:
	I0830 20:26:50.556730  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:26:50.556738  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:26:50.556747  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:26:50.556755  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:26:50.556765  241645 round_trippers.go:580]     Content-Length: 3484
	I0830 20:26:50.556775  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:26:50 GMT
	I0830 20:26:50.556781  241645 round_trippers.go:580]     Audit-Id: 4b584d9b-b0d6-40a6-89ca-9e81ff84fb23
	I0830 20:26:50.557226  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"528","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2460 chars]
	I0830 20:26:51.051915  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:26:51.051946  241645 round_trippers.go:469] Request Headers:
	I0830 20:26:51.051956  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:26:51.051964  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:26:51.054545  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:26:51.054565  241645 round_trippers.go:577] Response Headers:
	I0830 20:26:51.054574  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:26:51.054583  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:26:51.054591  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:26:51.054596  241645 round_trippers.go:580]     Content-Length: 3484
	I0830 20:26:51.054602  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:26:51 GMT
	I0830 20:26:51.054607  241645 round_trippers.go:580]     Audit-Id: 38946057-f7d9-410f-8656-feba1211a3f3
	I0830 20:26:51.054616  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:26:51.054704  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"528","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2460 chars]
	I0830 20:26:51.551802  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:26:51.551830  241645 round_trippers.go:469] Request Headers:
	I0830 20:26:51.551838  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:26:51.551845  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:26:51.556087  241645 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 20:26:51.556110  241645 round_trippers.go:577] Response Headers:
	I0830 20:26:51.556118  241645 round_trippers.go:580]     Audit-Id: d6ae2869-80ed-40a6-bcaa-d19b50a839f7
	I0830 20:26:51.556123  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:26:51.556128  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:26:51.556133  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:26:51.556139  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:26:51.556144  241645 round_trippers.go:580]     Content-Length: 3484
	I0830 20:26:51.556149  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:26:51 GMT
	I0830 20:26:51.556322  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"528","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2460 chars]
	I0830 20:26:52.052557  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:26:52.052581  241645 round_trippers.go:469] Request Headers:
	I0830 20:26:52.052590  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:26:52.052599  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:26:52.056721  241645 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 20:26:52.056743  241645 round_trippers.go:577] Response Headers:
	I0830 20:26:52.056750  241645 round_trippers.go:580]     Content-Length: 3484
	I0830 20:26:52.056756  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:26:52 GMT
	I0830 20:26:52.056761  241645 round_trippers.go:580]     Audit-Id: 4f3fb346-f49f-4998-8168-9486d0c18545
	I0830 20:26:52.056766  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:26:52.056772  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:26:52.056778  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:26:52.056787  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:26:52.057058  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"528","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2460 chars]
	I0830 20:26:52.057325  241645 node_ready.go:58] node "multinode-944570-m02" has status "Ready":"False"
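
Everything from here to the end of the wait is one polling loop: re-fetch the Node roughly every 500ms, read its Ready condition out of the response, and stop once it flips to True or the 6m0s budget set above runs out; the node_ready lines record each "False" observation. A client-go sketch of that loop (kubeconfig path is a placeholder; node name and timings match this run):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the Node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		n, err := cs.CoreV1().Nodes().Get(ctx, "multinode-944570-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for node to become Ready")
		case <-time.After(500 * time.Millisecond): // the log's polling cadence
		}
	}
}
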
	I0830 20:26:52.552631  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:26:52.552665  241645 round_trippers.go:469] Request Headers:
	I0830 20:26:52.552677  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:26:52.552687  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:26:52.555837  241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 20:26:52.555873  241645 round_trippers.go:577] Response Headers:
	I0830 20:26:52.555885  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:26:52.555893  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:26:52.555901  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:26:52.555910  241645 round_trippers.go:580]     Content-Length: 3484
	I0830 20:26:52.555919  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:26:52 GMT
	I0830 20:26:52.555932  241645 round_trippers.go:580]     Audit-Id: 13c81e07-ab24-44d5-ab02-c1a7e5a08e0a
	I0830 20:26:52.555941  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:26:52.556055  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"528","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2460 chars]
	I0830 20:26:53.052568  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:26:53.052591  241645 round_trippers.go:469] Request Headers:
	I0830 20:26:53.052599  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:26:53.052606  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:26:53.055367  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:26:53.055394  241645 round_trippers.go:577] Response Headers:
	I0830 20:26:53.055406  241645 round_trippers.go:580]     Audit-Id: 0a5e66d8-3ae4-4c87-b380-a7fac3c498f9
	I0830 20:26:53.055415  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:26:53.055423  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:26:53.055433  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:26:53.055445  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:26:53.055454  241645 round_trippers.go:580]     Content-Length: 3484
	I0830 20:26:53.055463  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:26:53 GMT
	I0830 20:26:53.055540  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"528","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2460 chars]
	I0830 20:26:53.551917  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:26:53.551947  241645 round_trippers.go:469] Request Headers:
	I0830 20:26:53.551960  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:26:53.551970  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:26:53.554596  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:26:53.554614  241645 round_trippers.go:577] Response Headers:
	I0830 20:26:53.554621  241645 round_trippers.go:580]     Content-Length: 3484
	I0830 20:26:53.554627  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:26:53 GMT
	I0830 20:26:53.554632  241645 round_trippers.go:580]     Audit-Id: dc9cb34b-197c-4d5f-8b08-ca7aa76218a4
	I0830 20:26:53.554642  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:26:53.554651  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:26:53.554660  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:26:53.554670  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:26:53.554754  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"528","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2460 chars]
	I0830 20:26:54.052437  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:26:54.052460  241645 round_trippers.go:469] Request Headers:
	I0830 20:26:54.052475  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:26:54.052481  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:26:54.055696  241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 20:26:54.055718  241645 round_trippers.go:577] Response Headers:
	I0830 20:26:54.055726  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:26:54.055732  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:26:54.055738  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:26:54.055743  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:26:54.055749  241645 round_trippers.go:580]     Content-Length: 3593
	I0830 20:26:54.055754  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:26:54 GMT
	I0830 20:26:54.055761  241645 round_trippers.go:580]     Audit-Id: 0bbdc749-b427-48d9-b91c-a2f80d9d6182
	I0830 20:26:54.055855  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"536","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2569 chars]
	I0830 20:26:54.551952  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:26:54.551977  241645 round_trippers.go:469] Request Headers:
	I0830 20:26:54.551985  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:26:54.551991  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:26:54.555422  241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 20:26:54.555440  241645 round_trippers.go:577] Response Headers:
	I0830 20:26:54.555447  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:26:54 GMT
	I0830 20:26:54.555454  241645 round_trippers.go:580]     Audit-Id: 18e4ae8e-28d1-49d2-a122-d99ce6a50b26
	I0830 20:26:54.555468  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:26:54.555479  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:26:54.555490  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:26:54.555502  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:26:54.555510  241645 round_trippers.go:580]     Content-Length: 3593
	I0830 20:26:54.555601  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"536","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2569 chars]
	I0830 20:26:54.555925  241645 node_ready.go:58] node "multinode-944570-m02" has status "Ready":"False"
	I0830 20:26:55.051601  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:26:55.051626  241645 round_trippers.go:469] Request Headers:
	I0830 20:26:55.051640  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:26:55.051650  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:26:55.054516  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:26:55.054537  241645 round_trippers.go:577] Response Headers:
	I0830 20:26:55.054549  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:26:55.054556  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:26:55.054565  241645 round_trippers.go:580]     Content-Length: 3593
	I0830 20:26:55.054575  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:26:55 GMT
	I0830 20:26:55.054586  241645 round_trippers.go:580]     Audit-Id: 974f116d-0216-4762-9183-1015e51e5e3d
	I0830 20:26:55.054596  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:26:55.054606  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:26:55.054713  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"536","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2569 chars]
	I0830 20:26:55.551935  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:26:55.551961  241645 round_trippers.go:469] Request Headers:
	I0830 20:26:55.551974  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:26:55.551982  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:26:55.554727  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:26:55.554755  241645 round_trippers.go:577] Response Headers:
	I0830 20:26:55.554766  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:26:55.554775  241645 round_trippers.go:580]     Content-Length: 3593
	I0830 20:26:55.554783  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:26:55 GMT
	I0830 20:26:55.554791  241645 round_trippers.go:580]     Audit-Id: 5105a81f-7ebf-4f95-94e4-d96ad38601ce
	I0830 20:26:55.554800  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:26:55.554810  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:26:55.554820  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:26:55.554913  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"536","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2569 chars]
	I0830 20:26:56.052573  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:26:56.052604  241645 round_trippers.go:469] Request Headers:
	I0830 20:26:56.052625  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:26:56.052634  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:26:56.055454  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:26:56.055487  241645 round_trippers.go:577] Response Headers:
	I0830 20:26:56.055500  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:26:56.055509  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:26:56.055523  241645 round_trippers.go:580]     Content-Length: 3593
	I0830 20:26:56.055532  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:26:56 GMT
	I0830 20:26:56.055545  241645 round_trippers.go:580]     Audit-Id: 2ebe5a02-ac79-4615-92d3-0611eda8efb2
	I0830 20:26:56.055556  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:26:56.055567  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:26:56.055629  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"536","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2569 chars]
	I0830 20:26:56.551867  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:26:56.551894  241645 round_trippers.go:469] Request Headers:
	I0830 20:26:56.551905  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:26:56.551913  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:26:56.554437  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:26:56.554469  241645 round_trippers.go:577] Response Headers:
	I0830 20:26:56.554482  241645 round_trippers.go:580]     Audit-Id: 32c6b61d-1015-42d4-9f3c-250479cf0464
	I0830 20:26:56.554491  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:26:56.554504  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:26:56.554514  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:26:56.554523  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:26:56.554533  241645 round_trippers.go:580]     Content-Length: 3593
	I0830 20:26:56.554542  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:26:56 GMT
	I0830 20:26:56.554659  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"536","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2569 chars]
	I0830 20:26:57.052026  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:26:57.052052  241645 round_trippers.go:469] Request Headers:
	I0830 20:26:57.052061  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:26:57.052067  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:26:57.055201  241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 20:26:57.055229  241645 round_trippers.go:577] Response Headers:
	I0830 20:26:57.055239  241645 round_trippers.go:580]     Content-Length: 3593
	I0830 20:26:57.055248  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:26:57 GMT
	I0830 20:26:57.055257  241645 round_trippers.go:580]     Audit-Id: 8a7fd742-f25f-40ea-b9ac-46606095e66c
	I0830 20:26:57.055266  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:26:57.055275  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:26:57.055286  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:26:57.055311  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:26:57.055408  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"536","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2569 chars]
	I0830 20:26:57.055739  241645 node_ready.go:58] node "multinode-944570-m02" has status "Ready":"False"
	I0830 20:26:57.552646  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:26:57.552676  241645 round_trippers.go:469] Request Headers:
	I0830 20:26:57.552692  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:26:57.552698  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:26:57.556398  241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 20:26:57.556469  241645 round_trippers.go:577] Response Headers:
	I0830 20:26:57.556487  241645 round_trippers.go:580]     Audit-Id: 7bee2d41-58d7-4568-9fca-95149c42f714
	I0830 20:26:57.556495  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:26:57.556501  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:26:57.556507  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:26:57.556513  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:26:57.556528  241645 round_trippers.go:580]     Content-Length: 3593
	I0830 20:26:57.556534  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:26:57 GMT
	I0830 20:26:57.556634  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"536","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2569 chars]
	I0830 20:26:58.052279  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:26:58.052302  241645 round_trippers.go:469] Request Headers:
	I0830 20:26:58.052314  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:26:58.052324  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:26:58.055751  241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 20:26:58.055776  241645 round_trippers.go:577] Response Headers:
	I0830 20:26:58.055787  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:26:58 GMT
	I0830 20:26:58.055795  241645 round_trippers.go:580]     Audit-Id: 444a9a46-a9ac-4d72-bae2-63959a4414b3
	I0830 20:26:58.055806  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:26:58.055815  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:26:58.055824  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:26:58.055833  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:26:58.055845  241645 round_trippers.go:580]     Content-Length: 3593
	I0830 20:26:58.055931  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"536","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2569 chars]
	I0830 20:26:58.552342  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:26:58.552412  241645 round_trippers.go:469] Request Headers:
	I0830 20:26:58.552425  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:26:58.552432  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:26:58.555516  241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 20:26:58.555537  241645 round_trippers.go:577] Response Headers:
	I0830 20:26:58.555547  241645 round_trippers.go:580]     Audit-Id: a39077d2-2a79-48f2-a333-2af8a33e50fd
	I0830 20:26:58.555555  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:26:58.555566  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:26:58.555575  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:26:58.555585  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:26:58.555595  241645 round_trippers.go:580]     Content-Length: 3593
	I0830 20:26:58.555612  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:26:58 GMT
	I0830 20:26:58.555759  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"536","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2569 chars]
	I0830 20:26:59.052129  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:26:59.052152  241645 round_trippers.go:469] Request Headers:
	I0830 20:26:59.052161  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:26:59.052166  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:26:59.054698  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:26:59.054721  241645 round_trippers.go:577] Response Headers:
	I0830 20:26:59.054729  241645 round_trippers.go:580]     Audit-Id: 272dca67-50b6-458e-b85c-d6ac249276b6
	I0830 20:26:59.054735  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:26:59.054741  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:26:59.054746  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:26:59.054752  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:26:59.054757  241645 round_trippers.go:580]     Content-Length: 3593
	I0830 20:26:59.054763  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:26:59 GMT
	I0830 20:26:59.054813  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"536","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2569 chars]
	I0830 20:26:59.552600  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:26:59.552636  241645 round_trippers.go:469] Request Headers:
	I0830 20:26:59.552650  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:26:59.552662  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:26:59.555615  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:26:59.555647  241645 round_trippers.go:577] Response Headers:
	I0830 20:26:59.555661  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:26:59.555671  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:26:59.555680  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:26:59.555689  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:26:59.555702  241645 round_trippers.go:580]     Content-Length: 3862
	I0830 20:26:59.555708  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:26:59 GMT
	I0830 20:26:59.555717  241645 round_trippers.go:580]     Audit-Id: a29ce3e5-7430-4301-97b1-d11619163f9d
	I0830 20:26:59.555821  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"553","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2838 chars]
	I0830 20:26:59.556146  241645 node_ready.go:58] node "multinode-944570-m02" has status "Ready":"False"
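The GET/response cycles above are a readiness poll: the Node object is re-fetched roughly every 500ms and its Ready condition inspected, with node_ready.go reporting "Ready":"False" until the kubelet posts Ready=True (which happens below once the node's resourceVersion reaches 559). A minimal sketch of such a poll with client-go, illustrative only (the file and helper names are made up; this is not minikube's actual node_ready.go):

    // nodeready_sketch.go: illustrative only; not minikube source.
    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady re-fetches the node on a fixed interval and returns once
    // its NodeReady condition is True, mirroring the GET-every-500ms loop here.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient error: keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil // kubelet has not posted a Ready condition yet
            })
    }

A watch on the single node (cs.CoreV1().Nodes().Watch with a metadata.name field selector) would avoid the fixed 500ms cadence, but a plain poll keeps the retry logic trivial, which is the trade-off visible in this log.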
	I0830 20:27:00.051872  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:27:00.051892  241645 round_trippers.go:469] Request Headers:
	I0830 20:27:00.051901  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:27:00.051908  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:27:00.054896  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:27:00.054915  241645 round_trippers.go:577] Response Headers:
	I0830 20:27:00.054922  241645 round_trippers.go:580]     Content-Length: 3862
	I0830 20:27:00.054928  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:27:00 GMT
	I0830 20:27:00.054934  241645 round_trippers.go:580]     Audit-Id: ebf3b28e-9db0-432a-a4c0-e9dc1c97c040
	I0830 20:27:00.054943  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:27:00.054957  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:27:00.054965  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:27:00.054977  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:27:00.055103  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"553","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2838 chars]
	I0830 20:27:00.551695  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:27:00.551718  241645 round_trippers.go:469] Request Headers:
	I0830 20:27:00.551727  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:27:00.551733  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:27:00.555088  241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 20:27:00.555106  241645 round_trippers.go:577] Response Headers:
	I0830 20:27:00.555113  241645 round_trippers.go:580]     Audit-Id: 9b35cbd3-7b6d-4037-847d-0d49cb9ab635
	I0830 20:27:00.555118  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:27:00.555130  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:27:00.555148  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:27:00.555158  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:27:00.555167  241645 round_trippers.go:580]     Content-Length: 3862
	I0830 20:27:00.555173  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:27:00 GMT
	I0830 20:27:00.555257  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"553","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2838 chars]
	I0830 20:27:01.051835  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:27:01.051881  241645 round_trippers.go:469] Request Headers:
	I0830 20:27:01.051895  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:27:01.051906  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:27:01.054939  241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 20:27:01.054959  241645 round_trippers.go:577] Response Headers:
	I0830 20:27:01.054967  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:27:01.054973  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:27:01.054978  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:27:01.054984  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:27:01.054989  241645 round_trippers.go:580]     Content-Length: 3728
	I0830 20:27:01.055001  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:27:01 GMT
	I0830 20:27:01.055008  241645 round_trippers.go:580]     Audit-Id: d5959320-74aa-479c-a0bf-f110e46db008
	I0830 20:27:01.055095  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"559","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2704 chars]
	I0830 20:27:01.055359  241645 node_ready.go:49] node "multinode-944570-m02" has status "Ready":"True"
	I0830 20:27:01.055379  241645 node_ready.go:38] duration metric: took 11.012463872s waiting for node "multinode-944570-m02" to be "Ready" ...
	I0830 20:27:01.055389  241645 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
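Once the node turns Ready, the wait switches from one Node object to the set of system-critical pods carrying the labels listed above. A minimal sketch of that second phase, assuming a plain client-go clientset (names are illustrative; this is not minikube's pod_ready.go): each listed label selector is checked in turn, and every matched pod in kube-system must carry condition Ready=True.

    // podready_sketch.go: illustrative only; not minikube source.
    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // systemPodsReady lists kube-system pods for each critical-component
    // selector and reports whether every matched pod is Ready.
    func systemPodsReady(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        selectors := []string{
            "k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
            "component=kube-controller-manager", "k8s-app=kube-proxy",
            "component=kube-scheduler",
        }
        for _, sel := range selectors {
            pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
                metav1.ListOptions{LabelSelector: sel})
            if err != nil {
                return false, err
            }
            for _, p := range pods.Items {
                if !podReady(&p) {
                    return false, nil
                }
            }
        }
        return true, nil
    }

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

The log below takes a slightly different route (one PodList GET followed by per-pod GETs), but the condition it reports at pod_ready.go:92 is the same pod-level "Ready":"True".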
	I0830 20:27:01.055454  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods
	I0830 20:27:01.055462  241645 round_trippers.go:469] Request Headers:
	I0830 20:27:01.055469  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:27:01.055476  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:27:01.065361  241645 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0830 20:27:01.065390  241645 round_trippers.go:577] Response Headers:
	I0830 20:27:01.065401  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:27:01.065409  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:27:01 GMT
	I0830 20:27:01.065415  241645 round_trippers.go:580]     Audit-Id: 7c208406-7adc-4ffe-bf53-b35a15638314
	I0830 20:27:01.065420  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:27:01.065425  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:27:01.065430  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:27:01.067625  241645 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"559"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lzj6n","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45","resourceVersion":"453","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"45f04028-4f16-400d-9690-5524a005f3c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45f04028-4f16-400d-9690-5524a005f3c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67516 chars]
	I0830 20:27:01.069692  241645 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lzj6n" in "kube-system" namespace to be "Ready" ...
	I0830 20:27:01.069774  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lzj6n
	I0830 20:27:01.069786  241645 round_trippers.go:469] Request Headers:
	I0830 20:27:01.069798  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:27:01.069805  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:27:01.072491  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:27:01.072515  241645 round_trippers.go:577] Response Headers:
	I0830 20:27:01.072525  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:27:01.072534  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:27:01.072542  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:27:01.072550  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:27:01 GMT
	I0830 20:27:01.072562  241645 round_trippers.go:580]     Audit-Id: 96928a7b-c510-49da-9fb6-48c5efc4a787
	I0830 20:27:01.072571  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:27:01.072680  241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lzj6n","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45","resourceVersion":"453","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"45f04028-4f16-400d-9690-5524a005f3c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45f04028-4f16-400d-9690-5524a005f3c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
	I0830 20:27:01.073238  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:27:01.073256  241645 round_trippers.go:469] Request Headers:
	I0830 20:27:01.073266  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:27:01.073280  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:27:01.075216  241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 20:27:01.075229  241645 round_trippers.go:577] Response Headers:
	I0830 20:27:01.075236  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:27:01.075242  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:27:01.075251  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:27:01.075259  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:27:01.075271  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:27:01 GMT
	I0830 20:27:01.075279  241645 round_trippers.go:580]     Audit-Id: b9fb716b-863d-4d9c-8209-eec48de62088
	I0830 20:27:01.075628  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"462","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0830 20:27:01.075991  241645 pod_ready.go:92] pod "coredns-5dd5756b68-lzj6n" in "kube-system" namespace has status "Ready":"True"
	I0830 20:27:01.076008  241645 pod_ready.go:81] duration metric: took 6.295046ms waiting for pod "coredns-5dd5756b68-lzj6n" in "kube-system" namespace to be "Ready" ...
	I0830 20:27:01.076016  241645 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-944570" in "kube-system" namespace to be "Ready" ...
	I0830 20:27:01.076065  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-944570
	I0830 20:27:01.076072  241645 round_trippers.go:469] Request Headers:
	I0830 20:27:01.076079  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:27:01.076086  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:27:01.077742  241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 20:27:01.077760  241645 round_trippers.go:577] Response Headers:
	I0830 20:27:01.077777  241645 round_trippers.go:580]     Audit-Id: a23ad99d-a893-43d3-a065-54aaa94e08bb
	I0830 20:27:01.077787  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:27:01.077800  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:27:01.077808  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:27:01.077821  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:27:01.077833  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:27:01 GMT
	I0830 20:27:01.077932  241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-944570","namespace":"kube-system","uid":"8a7e3daf-bab9-401d-9448-0dd7a1710cc9","resourceVersion":"424","creationTimestamp":"2023-08-30T20:25:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.254:2379","kubernetes.io/config.hash":"fb846e75466869998dbb9a265eafadb1","kubernetes.io/config.mirror":"fb846e75466869998dbb9a265eafadb1","kubernetes.io/config.seen":"2023-08-30T20:25:25.839839858Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
	I0830 20:27:01.078378  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:27:01.078395  241645 round_trippers.go:469] Request Headers:
	I0830 20:27:01.078404  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:27:01.078417  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:27:01.079894  241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 20:27:01.079907  241645 round_trippers.go:577] Response Headers:
	I0830 20:27:01.079913  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:27:01.079918  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:27:01.079924  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:27:01.079932  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:27:01 GMT
	I0830 20:27:01.079944  241645 round_trippers.go:580]     Audit-Id: 01a147fc-973a-4f1c-b4e8-886ef6e6d0e5
	I0830 20:27:01.079955  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:27:01.080099  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"462","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0830 20:27:01.080333  241645 pod_ready.go:92] pod "etcd-multinode-944570" in "kube-system" namespace has status "Ready":"True"
	I0830 20:27:01.080344  241645 pod_ready.go:81] duration metric: took 4.323512ms waiting for pod "etcd-multinode-944570" in "kube-system" namespace to be "Ready" ...
	I0830 20:27:01.080357  241645 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-944570" in "kube-system" namespace to be "Ready" ...
	I0830 20:27:01.080398  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-944570
	I0830 20:27:01.080405  241645 round_trippers.go:469] Request Headers:
	I0830 20:27:01.080412  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:27:01.080417  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:27:01.082010  241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 20:27:01.082027  241645 round_trippers.go:577] Response Headers:
	I0830 20:27:01.082033  241645 round_trippers.go:580]     Audit-Id: 1c7dcfc6-70d9-4ffa-82f1-e4b4190b5ff7
	I0830 20:27:01.082038  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:27:01.082043  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:27:01.082050  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:27:01.082056  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:27:01.082062  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:27:01 GMT
	I0830 20:27:01.082219  241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-944570","namespace":"kube-system","uid":"396cdb5a-0161-4c66-8588-6c1c62cae7be","resourceVersion":"425","creationTimestamp":"2023-08-30T20:25:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.254:8443","kubernetes.io/config.hash":"5c113dc76381297356051f3bc6bc6fd1","kubernetes.io/config.mirror":"5c113dc76381297356051f3bc6bc6fd1","kubernetes.io/config.seen":"2023-08-30T20:25:25.839841108Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
	I0830 20:27:01.082529  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:27:01.082540  241645 round_trippers.go:469] Request Headers:
	I0830 20:27:01.082547  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:27:01.082552  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:27:01.084675  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:27:01.084698  241645 round_trippers.go:577] Response Headers:
	I0830 20:27:01.084704  241645 round_trippers.go:580]     Audit-Id: ab0a710d-cae2-488e-9653-c41dc1031fa0
	I0830 20:27:01.084709  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:27:01.084715  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:27:01.084720  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:27:01.084725  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:27:01.084730  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:27:01 GMT
	I0830 20:27:01.084823  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"462","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0830 20:27:01.085053  241645 pod_ready.go:92] pod "kube-apiserver-multinode-944570" in "kube-system" namespace has status "Ready":"True"
	I0830 20:27:01.085063  241645 pod_ready.go:81] duration metric: took 4.701298ms waiting for pod "kube-apiserver-multinode-944570" in "kube-system" namespace to be "Ready" ...
	I0830 20:27:01.085071  241645 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-944570" in "kube-system" namespace to be "Ready" ...
	I0830 20:27:01.085110  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-944570
	I0830 20:27:01.085118  241645 round_trippers.go:469] Request Headers:
	I0830 20:27:01.085124  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:27:01.085131  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:27:01.086886  241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 20:27:01.086901  241645 round_trippers.go:577] Response Headers:
	I0830 20:27:01.086906  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:27:01.086912  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:27:01.086917  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:27:01 GMT
	I0830 20:27:01.086926  241645 round_trippers.go:580]     Audit-Id: 899dd9a5-0b3e-4c18-8465-ee73198a8bdc
	I0830 20:27:01.086933  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:27:01.086946  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:27:01.087093  241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-944570","namespace":"kube-system","uid":"6666fc21-62a9-4141-bb88-71bd4fe72b40","resourceVersion":"421","creationTimestamp":"2023-08-30T20:25:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ed3bbefd4c2f35595e2c0897a29a0a1c","kubernetes.io/config.mirror":"ed3bbefd4c2f35595e2c0897a29a0a1c","kubernetes.io/config.seen":"2023-08-30T20:25:25.839841993Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
	I0830 20:27:01.087425  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:27:01.087437  241645 round_trippers.go:469] Request Headers:
	I0830 20:27:01.087444  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:27:01.087450  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:27:01.089061  241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 20:27:01.089078  241645 round_trippers.go:577] Response Headers:
	I0830 20:27:01.089085  241645 round_trippers.go:580]     Audit-Id: 73586935-479b-4fa5-a5e9-fc87c7811a4d
	I0830 20:27:01.089090  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:27:01.089096  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:27:01.089104  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:27:01.089109  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:27:01.089118  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:27:01 GMT
	I0830 20:27:01.089201  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"462","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0830 20:27:01.089419  241645 pod_ready.go:92] pod "kube-controller-manager-multinode-944570" in "kube-system" namespace has status "Ready":"True"
	I0830 20:27:01.089431  241645 pod_ready.go:81] duration metric: took 4.354056ms waiting for pod "kube-controller-manager-multinode-944570" in "kube-system" namespace to be "Ready" ...
	I0830 20:27:01.089439  241645 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hrz7d" in "kube-system" namespace to be "Ready" ...
	I0830 20:27:01.252930  241645 request.go:629] Waited for 163.400815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hrz7d
	I0830 20:27:01.253002  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hrz7d
	I0830 20:27:01.253007  241645 round_trippers.go:469] Request Headers:
	I0830 20:27:01.253016  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:27:01.253023  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:27:01.255571  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:27:01.255599  241645 round_trippers.go:577] Response Headers:
	I0830 20:27:01.255614  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:27:01 GMT
	I0830 20:27:01.255628  241645 round_trippers.go:580]     Audit-Id: d480c133-5b3d-48f3-ab73-b24b84173c92
	I0830 20:27:01.255641  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:27:01.255651  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:27:01.255658  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:27:01.255681  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:27:01.255819  241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hrz7d","generateName":"kube-proxy-","namespace":"kube-system","uid":"eb29e83b-aacd-4b74-b7f5-7f96252efba6","resourceVersion":"544","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"77539e61-eb1a-4d08-91c1-22ad50311843","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77539e61-eb1a-4d08-91c1-22ad50311843\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
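The "Waited for 163.400815ms due to client-side throttling, not priority and fairness" message above marks a delay imposed by client-go's own token-bucket rate limiter, not by the API server (server-side limiting would be the "priority and fairness" case the message rules out). rest.Config documents defaults of 5 QPS with a burst of 10 when those fields are left at zero, so once the burst is spent these sub-3ms requests queue behind the limiter: of the 366.618979ms recorded for kube-proxy-hrz7d just below, roughly 360ms is limiter wait (163.400815ms here plus the 196.355886ms that follows). Where that overhead matters, the limits can be raised on the rest.Config before the clientset is built; a minimal sketch with arbitrary example values:

    // ratelimit_sketch.go: illustrative only; values are arbitrary examples.
    package sketch

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient builds a clientset whose client-side token bucket allows
    // 50 requests/s with a burst of 100, instead of the 5/10 defaults that
    // produce the "client-side throttling" waits seen in this log.
    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }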
	I0830 20:27:01.452643  241645 request.go:629] Waited for 196.355886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:27:01.452732  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
	I0830 20:27:01.452739  241645 round_trippers.go:469] Request Headers:
	I0830 20:27:01.452746  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:27:01.452755  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:27:01.455646  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:27:01.455672  241645 round_trippers.go:577] Response Headers:
	I0830 20:27:01.455679  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:27:01 GMT
	I0830 20:27:01.455684  241645 round_trippers.go:580]     Audit-Id: 64483f25-d1e9-4144-8f5f-8cecaa9f922b
	I0830 20:27:01.455690  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:27:01.455700  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:27:01.455705  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:27:01.455711  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:27:01.455720  241645 round_trippers.go:580]     Content-Length: 3728
	I0830 20:27:01.455808  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"559","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2704 chars]
	I0830 20:27:01.456050  241645 pod_ready.go:92] pod "kube-proxy-hrz7d" in "kube-system" namespace has status "Ready":"True"
	I0830 20:27:01.456062  241645 pod_ready.go:81] duration metric: took 366.618979ms waiting for pod "kube-proxy-hrz7d" in "kube-system" namespace to be "Ready" ...
	I0830 20:27:01.456071  241645 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nqnp2" in "kube-system" namespace to be "Ready" ...
	I0830 20:27:01.652572  241645 request.go:629] Waited for 196.401598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqnp2
	I0830 20:27:01.652635  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqnp2
	I0830 20:27:01.652640  241645 round_trippers.go:469] Request Headers:
	I0830 20:27:01.652647  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:27:01.652657  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:27:01.656709  241645 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 20:27:01.656730  241645 round_trippers.go:577] Response Headers:
	I0830 20:27:01.656737  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:27:01.656743  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:27:01.656752  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:27:01.656766  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:27:01 GMT
	I0830 20:27:01.656774  241645 round_trippers.go:580]     Audit-Id: 93f5a92c-5ec8-4de0-91cb-305e7cc512ca
	I0830 20:27:01.656781  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:27:01.656913  241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nqnp2","generateName":"kube-proxy-","namespace":"kube-system","uid":"fc7f17e0-b6ac-48c3-b449-e4eb3325505c","resourceVersion":"408","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"77539e61-eb1a-4d08-91c1-22ad50311843","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77539e61-eb1a-4d08-91c1-22ad50311843\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0830 20:27:01.852744  241645 request.go:629] Waited for 195.40388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:27:01.852821  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:27:01.852826  241645 round_trippers.go:469] Request Headers:
	I0830 20:27:01.852834  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:27:01.852843  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:27:01.855391  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:27:01.855413  241645 round_trippers.go:577] Response Headers:
	I0830 20:27:01.855420  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:27:01 GMT
	I0830 20:27:01.855427  241645 round_trippers.go:580]     Audit-Id: 190e8688-0718-4b50-8e07-8cab55d8cfa2
	I0830 20:27:01.855432  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:27:01.855437  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:27:01.855444  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:27:01.855450  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:27:01.855818  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"462","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0830 20:27:01.856146  241645 pod_ready.go:92] pod "kube-proxy-nqnp2" in "kube-system" namespace has status "Ready":"True"
	I0830 20:27:01.856161  241645 pod_ready.go:81] duration metric: took 400.084355ms waiting for pod "kube-proxy-nqnp2" in "kube-system" namespace to be "Ready" ...
	I0830 20:27:01.856178  241645 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-944570" in "kube-system" namespace to be "Ready" ...
	I0830 20:27:02.052484  241645 request.go:629] Waited for 196.215764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-944570
	I0830 20:27:02.052569  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-944570
	I0830 20:27:02.052576  241645 round_trippers.go:469] Request Headers:
	I0830 20:27:02.052584  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:27:02.052593  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:27:02.055473  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:27:02.055495  241645 round_trippers.go:577] Response Headers:
	I0830 20:27:02.055505  241645 round_trippers.go:580]     Audit-Id: 87f8eb24-a1b1-4989-8317-e50415cc134a
	I0830 20:27:02.055524  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:27:02.055533  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:27:02.055541  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:27:02.055552  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:27:02.055557  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:27:02 GMT
	I0830 20:27:02.055781  241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-944570","namespace":"kube-system","uid":"c2c628f7-bc4f-4f01-b67d-e105c72b8275","resourceVersion":"422","creationTimestamp":"2023-08-30T20:25:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"21d92ce9120286f1f3c68c1f19570340","kubernetes.io/config.mirror":"21d92ce9120286f1f3c68c1f19570340","kubernetes.io/config.seen":"2023-08-30T20:25:25.839835923Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
	I0830 20:27:02.252545  241645 request.go:629] Waited for 196.379889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:27:02.252626  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
	I0830 20:27:02.252631  241645 round_trippers.go:469] Request Headers:
	I0830 20:27:02.252639  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:27:02.252645  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:27:02.255037  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:27:02.255058  241645 round_trippers.go:577] Response Headers:
	I0830 20:27:02.255065  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:27:02.255071  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:27:02.255076  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:27:02.255081  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:27:02 GMT
	I0830 20:27:02.255087  241645 round_trippers.go:580]     Audit-Id: 64ab7396-2c86-4e21-9b70-9dac81c2d387
	I0830 20:27:02.255092  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:27:02.255325  241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"462","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0830 20:27:02.255637  241645 pod_ready.go:92] pod "kube-scheduler-multinode-944570" in "kube-system" namespace has status "Ready":"True"
	I0830 20:27:02.255652  241645 pod_ready.go:81] duration metric: took 399.46153ms waiting for pod "kube-scheduler-multinode-944570" in "kube-system" namespace to be "Ready" ...
	I0830 20:27:02.255661  241645 pod_ready.go:38] duration metric: took 1.200257143s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 20:27:02.255683  241645 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 20:27:02.255732  241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 20:27:02.268139  241645 system_svc.go:56] duration metric: took 12.447616ms WaitForService to wait for kubelet.
	I0830 20:27:02.268168  241645 kubeadm.go:581] duration metric: took 12.243439268s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 20:27:02.268193  241645 node_conditions.go:102] verifying NodePressure condition ...
	I0830 20:27:02.452461  241645 request.go:629] Waited for 184.179451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/nodes
	I0830 20:27:02.452529  241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes
	I0830 20:27:02.452533  241645 round_trippers.go:469] Request Headers:
	I0830 20:27:02.452541  241645 round_trippers.go:473]     Accept: application/json, */*
	I0830 20:27:02.452548  241645 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 20:27:02.455243  241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 20:27:02.455261  241645 round_trippers.go:577] Response Headers:
	I0830 20:27:02.455269  241645 round_trippers.go:580]     Audit-Id: f38bc00a-92e4-47fa-bad4-b656b9097cf9
	I0830 20:27:02.455275  241645 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 20:27:02.455281  241645 round_trippers.go:580]     Content-Type: application/json
	I0830 20:27:02.455286  241645 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
	I0830 20:27:02.455306  241645 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
	I0830 20:27:02.455317  241645 round_trippers.go:580]     Date: Wed, 30 Aug 2023 20:27:02 GMT
	I0830 20:27:02.455515  241645 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"559"},"items":[{"metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"462","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 8708 chars]
	I0830 20:27:02.456094  241645 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 20:27:02.456112  241645 node_conditions.go:123] node cpu capacity is 2
	I0830 20:27:02.456123  241645 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 20:27:02.456130  241645 node_conditions.go:123] node cpu capacity is 2
	I0830 20:27:02.456143  241645 node_conditions.go:105] duration metric: took 187.937298ms to run NodePressure ...
	I0830 20:27:02.456156  241645 start.go:228] waiting for startup goroutines ...
	I0830 20:27:02.456187  241645 start.go:242] writing updated cluster config ...
	I0830 20:27:02.456584  241645 ssh_runner.go:195] Run: rm -f paused
	I0830 20:27:02.505879  241645 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0830 20:27:02.508797  241645 out.go:177] * Done! kubectl is now configured to use "multinode-944570" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-08-30 20:24:49 UTC, ends at Wed 2023-08-30 20:28:25 UTC. --
	Aug 30 20:25:51 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:51.482193656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 30 20:25:51 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:51.483519014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 30 20:25:51 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:51.483631848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 30 20:25:51 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:51.483662132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 30 20:25:51 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:51.483676215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 30 20:25:51 multinode-944570 cri-dockerd[1011]: time="2023-08-30T20:25:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1710b141f702688f2ac6c1123dd35b15c5c3dcf83e6a5b1ea4bbe967a5b28b11/resolv.conf as [nameserver 192.168.122.1]"
	Aug 30 20:25:52 multinode-944570 cri-dockerd[1011]: time="2023-08-30T20:25:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/206abe062563a58c0cfef43fd491b8a3cae33b87e0cc0fced346e41ef4ec84e9/resolv.conf as [nameserver 192.168.122.1]"
	Aug 30 20:25:52 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:52.103856996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 30 20:25:52 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:52.103900355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 30 20:25:52 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:52.103914789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 30 20:25:52 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:52.103927998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 30 20:25:52 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:52.128211475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 30 20:25:52 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:52.129968934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 30 20:25:52 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:52.130290431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 30 20:25:52 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:52.130363924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 30 20:27:03 multinode-944570 dockerd[1122]: time="2023-08-30T20:27:03.653262628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 30 20:27:03 multinode-944570 dockerd[1122]: time="2023-08-30T20:27:03.653322242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 30 20:27:03 multinode-944570 dockerd[1122]: time="2023-08-30T20:27:03.653347046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 30 20:27:03 multinode-944570 dockerd[1122]: time="2023-08-30T20:27:03.653360640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 30 20:27:04 multinode-944570 cri-dockerd[1011]: time="2023-08-30T20:27:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9a370fce241da53845a9c9e91de36ae198942c881204465bc22a4c8c1b27b095/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 30 20:27:06 multinode-944570 cri-dockerd[1011]: time="2023-08-30T20:27:06Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 30 20:27:06 multinode-944570 dockerd[1122]: time="2023-08-30T20:27:06.203490979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 30 20:27:06 multinode-944570 dockerd[1122]: time="2023-08-30T20:27:06.203696862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 30 20:27:06 multinode-944570 dockerd[1122]: time="2023-08-30T20:27:06.203716638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 30 20:27:06 multinode-944570 dockerd[1122]: time="2023-08-30T20:27:06.203729107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	e1b3528e7e0a9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   9a370fce241da
	cd5d628cc7d23       6e38f40d628db                                                                                         2 minutes ago        Running             storage-provisioner       0                   206abe062563a
	b8869105783a6       ead0a4a53df89                                                                                         2 minutes ago        Running             coredns                   0                   1710b141f7026
	750968b9a2208       kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974              2 minutes ago        Running             kindnet-cni               0                   2ed885733ebb0
	9331896493b39       6cdbabde3874e                                                                                         2 minutes ago        Running             kube-proxy                0                   d815034079fed
	b1920bbf2f90a       821b3dfea27be                                                                                         3 minutes ago        Running             kube-controller-manager   0                   6031b9bfee95a
	25034328bbdc8       b462ce0c8b1ff                                                                                         3 minutes ago        Running             kube-scheduler            0                   2d451861388c9
	adc09d4d4deb2       5c801295c21d0                                                                                         3 minutes ago        Running             kube-apiserver            0                   34fdd725e5e61
	2825b7061ea0c       73deb9a3f7025                                                                                         3 minutes ago        Running             etcd                      0                   185a0d6cacc72
	
	* 
	* ==> coredns [b8869105783a] <==
	* [INFO] 10.244.0.3:53917 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093956s
	[INFO] 10.244.1.2:37188 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204711s
	[INFO] 10.244.1.2:60110 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001691057s
	[INFO] 10.244.1.2:33919 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152258s
	[INFO] 10.244.1.2:44704 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010678s
	[INFO] 10.244.1.2:37501 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001396637s
	[INFO] 10.244.1.2:58260 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121533s
	[INFO] 10.244.1.2:38025 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098957s
	[INFO] 10.244.1.2:44825 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081106s
	[INFO] 10.244.0.3:41959 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099305s
	[INFO] 10.244.0.3:57839 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006831s
	[INFO] 10.244.0.3:34586 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072659s
	[INFO] 10.244.0.3:40296 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054142s
	[INFO] 10.244.1.2:50191 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016399s
	[INFO] 10.244.1.2:59772 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160355s
	[INFO] 10.244.1.2:33407 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120004s
	[INFO] 10.244.1.2:41985 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135351s
	[INFO] 10.244.0.3:55767 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153943s
	[INFO] 10.244.0.3:52348 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222259s
	[INFO] 10.244.0.3:54368 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177557s
	[INFO] 10.244.0.3:59329 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000714776s
	[INFO] 10.244.1.2:46152 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000232532s
	[INFO] 10.244.1.2:60653 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000151329s
	[INFO] 10.244.1.2:52486 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177787s
	[INFO] 10.244.1.2:39308 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092672s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-944570
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-944570
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7e60a4db8510b81002db541520f138fed781588
	                    minikube.k8s.io/name=multinode-944570
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_30T20_25_27_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 20:25:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-944570
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 20:28:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 20:27:28 +0000   Wed, 30 Aug 2023 20:25:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 20:27:28 +0000   Wed, 30 Aug 2023 20:25:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 20:27:28 +0000   Wed, 30 Aug 2023 20:25:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 20:27:28 +0000   Wed, 30 Aug 2023 20:25:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.254
	  Hostname:    multinode-944570
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 476d2a07b648465491fd90796577f2f4
	  System UUID:                476d2a07-b648-4654-91fd-90796577f2f4
	  Boot ID:                    384d102f-72a3-4c8d-a8c3-b3c37e330022
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.5
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-fhrtd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 coredns-5dd5756b68-lzj6n                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m48s
	  kube-system                 etcd-multinode-944570                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m
	  kube-system                 kindnet-mm2wq                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m48s
	  kube-system                 kube-apiserver-multinode-944570             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 kube-controller-manager-multinode-944570    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 kube-proxy-nqnp2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 kube-scheduler-multinode-944570             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m46s                kube-proxy       
	  Normal  Starting                 3m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m8s (x8 over 3m8s)  kubelet          Node multinode-944570 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m8s (x8 over 3m8s)  kubelet          Node multinode-944570 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m8s (x7 over 3m8s)  kubelet          Node multinode-944570 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m                   kubelet          Node multinode-944570 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m                   kubelet          Node multinode-944570 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m                   kubelet          Node multinode-944570 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m48s                node-controller  Node multinode-944570 event: Registered Node multinode-944570 in Controller
	  Normal  NodeReady                2m36s                kubelet          Node multinode-944570 status is now: NodeReady
	
	
	Name:               multinode-944570-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-944570-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 20:26:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-944570-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 20:28:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 20:27:19 +0000   Wed, 30 Aug 2023 20:26:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 20:27:19 +0000   Wed, 30 Aug 2023 20:26:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 20:27:19 +0000   Wed, 30 Aug 2023 20:26:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 20:27:19 +0000   Wed, 30 Aug 2023 20:27:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    multinode-944570-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c216fdfaab6546fbad7f82c635ecd591
	  System UUID:                c216fdfa-ab65-46fb-ad7f-82c635ecd591
	  Boot ID:                    07082b47-b067-4d8d-bf7e-80a61581e642
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.5
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-n5m7r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kindnet-z8vqm               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      97s
	  kube-system                 kube-proxy-hrz7d            0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 91s                kube-proxy       
	  Normal  NodeHasSufficientMemory  97s (x5 over 99s)  kubelet          Node multinode-944570-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s (x5 over 99s)  kubelet          Node multinode-944570-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s (x5 over 99s)  kubelet          Node multinode-944570-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           93s                node-controller  Node multinode-944570-m02 event: Registered Node multinode-944570-m02 in Controller
	  Normal  NodeReady                86s                kubelet          Node multinode-944570-m02 status is now: NodeReady
	
	
	Name:               multinode-944570-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-944570-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 20:27:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-944570-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 20:28:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 20:27:52 +0000   Wed, 30 Aug 2023 20:27:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 20:27:52 +0000   Wed, 30 Aug 2023 20:27:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 20:27:52 +0000   Wed, 30 Aug 2023 20:27:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 20:27:52 +0000   Wed, 30 Aug 2023 20:27:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.83
	  Hostname:    multinode-944570-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 53de3e7dea67440ab78c23344d9deeb7
	  System UUID:                53de3e7d-ea67-440a-b78c-23344d9deeb7
	  Boot ID:                    0f18c261-95b4-4797-a8c1-c19423e85cae
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.5
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fdzvb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      46s
	  kube-system                 kube-proxy-6d9l8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 40s                kube-proxy       
	  Normal  Starting                 47s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  47s (x2 over 47s)  kubelet          Node multinode-944570-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x2 over 47s)  kubelet          Node multinode-944570-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x2 over 47s)  kubelet          Node multinode-944570-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  46s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           43s                node-controller  Node multinode-944570-m03 event: Registered Node multinode-944570-m03 in Controller
	  Normal  NodeReady                34s                kubelet          Node multinode-944570-m03 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.065621] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.196746] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.631564] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.136063] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.007151] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug30 20:25] systemd-fstab-generator[548]: Ignoring "noauto" for root device
	[  +0.101057] systemd-fstab-generator[559]: Ignoring "noauto" for root device
	[  +1.024348] systemd-fstab-generator[736]: Ignoring "noauto" for root device
	[  +0.265804] systemd-fstab-generator[775]: Ignoring "noauto" for root device
	[  +0.110360] systemd-fstab-generator[786]: Ignoring "noauto" for root device
	[  +0.114474] systemd-fstab-generator[799]: Ignoring "noauto" for root device
	[  +1.453910] systemd-fstab-generator[956]: Ignoring "noauto" for root device
	[  +0.105474] systemd-fstab-generator[967]: Ignoring "noauto" for root device
	[  +0.109964] systemd-fstab-generator[978]: Ignoring "noauto" for root device
	[  +0.108418] systemd-fstab-generator[989]: Ignoring "noauto" for root device
	[  +0.119546] systemd-fstab-generator[1003]: Ignoring "noauto" for root device
	[  +4.078629] systemd-fstab-generator[1107]: Ignoring "noauto" for root device
	[  +4.286668] kauditd_printk_skb: 53 callbacks suppressed
	[  +3.588072] systemd-fstab-generator[1433]: Ignoring "noauto" for root device
	[  +7.749620] systemd-fstab-generator[2340]: Ignoring "noauto" for root device
	[ +14.369829] kauditd_printk_skb: 39 callbacks suppressed
	[  +7.238165] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [2825b7061ea0] <==
	* {"level":"info","ts":"2023-08-30T20:25:20.10645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b8de1e5bd82ef2a became candidate at term 2"}
	{"level":"info","ts":"2023-08-30T20:25:20.106455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b8de1e5bd82ef2a received MsgVoteResp from 9b8de1e5bd82ef2a at term 2"}
	{"level":"info","ts":"2023-08-30T20:25:20.106463Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b8de1e5bd82ef2a became leader at term 2"}
	{"level":"info","ts":"2023-08-30T20:25:20.10647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9b8de1e5bd82ef2a elected leader 9b8de1e5bd82ef2a at term 2"}
	{"level":"info","ts":"2023-08-30T20:25:20.107721Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T20:25:20.109839Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7053bcffcda7710c","local-member-id":"9b8de1e5bd82ef2a","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T20:25:20.109933Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T20:25:20.109951Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T20:25:20.10996Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-30T20:25:20.109969Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9b8de1e5bd82ef2a","local-member-attributes":"{Name:multinode-944570 ClientURLs:[https://192.168.39.254:2379]}","request-path":"/0/members/9b8de1e5bd82ef2a/attributes","cluster-id":"7053bcffcda7710c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-30T20:25:20.109995Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-30T20:25:20.110999Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.254:2379"}
	{"level":"info","ts":"2023-08-30T20:25:20.111054Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-30T20:25:20.111326Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-30T20:25:20.111338Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-30T20:25:46.147973Z","caller":"traceutil/trace.go:171","msg":"trace[341296804] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"129.655578ms","start":"2023-08-30T20:25:46.018289Z","end":"2023-08-30T20:25:46.147945Z","steps":["trace[341296804] 'process raft request'  (duration: 129.464671ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T20:27:40.101784Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.633799ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17233738966452560102 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-944570-m03.1780431ee2e77843\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-944570-m03.1780431ee2e77843\" value_size:642 lease:8010366929597783856 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-08-30T20:27:40.102087Z","caller":"traceutil/trace.go:171","msg":"trace[471184055] transaction","detail":"{read_only:false; response_revision:638; number_of_response:1; }","duration":"200.125754ms","start":"2023-08-30T20:27:39.901933Z","end":"2023-08-30T20:27:40.102059Z","steps":["trace[471184055] 'process raft request'  (duration: 200.074805ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-30T20:27:40.102366Z","caller":"traceutil/trace.go:171","msg":"trace[1809665780] transaction","detail":"{read_only:false; response_revision:637; number_of_response:1; }","duration":"256.669107ms","start":"2023-08-30T20:27:39.845688Z","end":"2023-08-30T20:27:40.102357Z","steps":["trace[1809665780] 'process raft request'  (duration: 85.440625ms)","trace[1809665780] 'compare'  (duration: 169.276548ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-30T20:27:40.102543Z","caller":"traceutil/trace.go:171","msg":"trace[424198061] linearizableReadLoop","detail":"{readStateIndex:674; appliedIndex:673; }","duration":"237.07016ms","start":"2023-08-30T20:27:39.865464Z","end":"2023-08-30T20:27:40.102535Z","steps":["trace[424198061] 'read index received'  (duration: 65.671284ms)","trace[424198061] 'applied index is now lower than readState.Index'  (duration: 171.397525ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-30T20:27:40.102776Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.321814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-944570-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-30T20:27:40.102809Z","caller":"traceutil/trace.go:171","msg":"trace[116404222] range","detail":"{range_begin:/registry/csinodes/multinode-944570-m03; range_end:; response_count:0; response_revision:638; }","duration":"237.361556ms","start":"2023-08-30T20:27:39.86544Z","end":"2023-08-30T20:27:40.102801Z","steps":["trace[116404222] 'agreement among raft nodes before linearized reading'  (duration: 237.282389ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-30T20:27:40.103049Z","caller":"traceutil/trace.go:171","msg":"trace[295565228] transaction","detail":"{read_only:false; response_revision:639; number_of_response:1; }","duration":"166.929682ms","start":"2023-08-30T20:27:39.936111Z","end":"2023-08-30T20:27:40.103041Z","steps":["trace[295565228] 'process raft request'  (duration: 166.853071ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-30T20:27:44.060424Z","caller":"traceutil/trace.go:171","msg":"trace[134648594] transaction","detail":"{read_only:false; response_revision:666; number_of_response:1; }","duration":"181.595056ms","start":"2023-08-30T20:27:43.878784Z","end":"2023-08-30T20:27:44.060379Z","steps":["trace[134648594] 'process raft request'  (duration: 181.444379ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-30T20:27:44.438441Z","caller":"traceutil/trace.go:171","msg":"trace[308397188] transaction","detail":"{read_only:false; response_revision:667; number_of_response:1; }","duration":"127.691503ms","start":"2023-08-30T20:27:44.310733Z","end":"2023-08-30T20:27:44.438425Z","steps":["trace[308397188] 'process raft request'  (duration: 62.470279ms)","trace[308397188] 'compare'  (duration: 65.144951ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  20:28:26 up 3 min,  0 users,  load average: 0.27, 0.28, 0.12
	Linux multinode-944570 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [750968b9a220] <==
	* I0830 20:27:47.280213       1 main.go:223] Handling node with IPs: map[192.168.39.254:{}]
	I0830 20:27:47.280271       1 main.go:227] handling current node
	I0830 20:27:47.280298       1 main.go:223] Handling node with IPs: map[192.168.39.87:{}]
	I0830 20:27:47.280308       1 main.go:250] Node multinode-944570-m02 has CIDR [10.244.1.0/24] 
	I0830 20:27:47.281466       1 main.go:223] Handling node with IPs: map[192.168.39.83:{}]
	I0830 20:27:47.281564       1 main.go:250] Node multinode-944570-m03 has CIDR [10.244.2.0/24] 
	I0830 20:27:47.281797       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.83 Flags: [] Table: 0} 
	I0830 20:27:57.357932       1 main.go:223] Handling node with IPs: map[192.168.39.254:{}]
	I0830 20:27:57.358004       1 main.go:227] handling current node
	I0830 20:27:57.358026       1 main.go:223] Handling node with IPs: map[192.168.39.87:{}]
	I0830 20:27:57.358034       1 main.go:250] Node multinode-944570-m02 has CIDR [10.244.1.0/24] 
	I0830 20:27:57.358662       1 main.go:223] Handling node with IPs: map[192.168.39.83:{}]
	I0830 20:27:57.358683       1 main.go:250] Node multinode-944570-m03 has CIDR [10.244.2.0/24] 
	I0830 20:28:07.365459       1 main.go:223] Handling node with IPs: map[192.168.39.254:{}]
	I0830 20:28:07.365952       1 main.go:227] handling current node
	I0830 20:28:07.366146       1 main.go:223] Handling node with IPs: map[192.168.39.87:{}]
	I0830 20:28:07.366278       1 main.go:250] Node multinode-944570-m02 has CIDR [10.244.1.0/24] 
	I0830 20:28:07.366571       1 main.go:223] Handling node with IPs: map[192.168.39.83:{}]
	I0830 20:28:07.366737       1 main.go:250] Node multinode-944570-m03 has CIDR [10.244.2.0/24] 
	I0830 20:28:17.376317       1 main.go:223] Handling node with IPs: map[192.168.39.254:{}]
	I0830 20:28:17.376366       1 main.go:227] handling current node
	I0830 20:28:17.376381       1 main.go:223] Handling node with IPs: map[192.168.39.87:{}]
	I0830 20:28:17.376388       1 main.go:250] Node multinode-944570-m02 has CIDR [10.244.1.0/24] 
	I0830 20:28:17.376960       1 main.go:223] Handling node with IPs: map[192.168.39.83:{}]
	I0830 20:28:17.376995       1 main.go:250] Node multinode-944570-m03 has CIDR [10.244.2.0/24] 
	
	* 
	* ==> kube-apiserver [adc09d4d4deb] <==
	* I0830 20:25:22.597041       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0830 20:25:22.597831       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0830 20:25:22.597861       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0830 20:25:22.600816       1 controller.go:624] quota admission added evaluator for: namespaces
	I0830 20:25:22.610990       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0830 20:25:22.634578       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0830 20:25:22.635435       1 aggregator.go:166] initial CRD sync complete...
	I0830 20:25:22.635464       1 autoregister_controller.go:141] Starting autoregister controller
	I0830 20:25:22.635470       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0830 20:25:22.635476       1 cache.go:39] Caches are synced for autoregister controller
	I0830 20:25:23.398767       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0830 20:25:23.408509       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0830 20:25:23.408550       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0830 20:25:24.099887       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0830 20:25:24.143668       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0830 20:25:24.218698       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0830 20:25:24.225161       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.254]
	I0830 20:25:24.226021       1 controller.go:624] quota admission added evaluator for: endpoints
	I0830 20:25:24.232830       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0830 20:25:24.514851       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0830 20:25:25.700267       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0830 20:25:25.718273       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0830 20:25:25.731768       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0830 20:25:38.796450       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0830 20:25:38.810993       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [b1920bbf2f90] <==
	* I0830 20:26:49.038684       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hrz7d"
	I0830 20:26:49.046556       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-z8vqm"
	I0830 20:26:53.780042       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-944570-m02"
	I0830 20:26:53.780457       1 event.go:307] "Event occurred" object="multinode-944570-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-944570-m02 event: Registered Node multinode-944570-m02 in Controller"
	I0830 20:27:00.888574       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-944570-m02"
	I0830 20:27:03.208580       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0830 20:27:03.222358       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-n5m7r"
	I0830 20:27:03.241367       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-fhrtd"
	I0830 20:27:03.260453       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="52.74723ms"
	I0830 20:27:03.282958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.427535ms"
	I0830 20:27:03.298407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="15.179588ms"
	I0830 20:27:03.298757       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="50.478µs"
	I0830 20:27:03.790057       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-n5m7r" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-n5m7r"
	I0830 20:27:06.109051       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="23.667083ms"
	I0830 20:27:06.109785       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="44.77µs"
	I0830 20:27:06.611733       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.529782ms"
	I0830 20:27:06.611996       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="43.693µs"
	I0830 20:27:40.105865       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-944570-m02"
	I0830 20:27:40.106964       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-944570-m03\" does not exist"
	I0830 20:27:40.116181       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-944570-m03" podCIDRs=["10.244.2.0/24"]
	I0830 20:27:40.130772       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6d9l8"
	I0830 20:27:40.131880       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fdzvb"
	I0830 20:27:43.797065       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-944570-m03"
	I0830 20:27:43.797579       1 event.go:307] "Event occurred" object="multinode-944570-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-944570-m03 event: Registered Node multinode-944570-m03 in Controller"
	I0830 20:27:52.544342       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-944570-m03"
	
	* 
	* ==> kube-proxy [9331896493b3] <==
	* I0830 20:25:39.814424       1 server_others.go:69] "Using iptables proxy"
	I0830 20:25:39.823656       1 node.go:141] Successfully retrieved node IP: 192.168.39.254
	I0830 20:25:39.892141       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0830 20:25:39.892181       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0830 20:25:39.894638       1 server_others.go:152] "Using iptables Proxier"
	I0830 20:25:39.894732       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0830 20:25:39.894972       1 server.go:846] "Version info" version="v1.28.1"
	I0830 20:25:39.894981       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 20:25:39.896868       1 config.go:188] "Starting service config controller"
	I0830 20:25:39.896925       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0830 20:25:39.896944       1 config.go:97] "Starting endpoint slice config controller"
	I0830 20:25:39.896948       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0830 20:25:39.899372       1 config.go:315] "Starting node config controller"
	I0830 20:25:39.899405       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0830 20:25:39.997975       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0830 20:25:39.998085       1 shared_informer.go:318] Caches are synced for service config
	I0830 20:25:40.000485       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [25034328bbdc] <==
	* W0830 20:25:22.571162       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0830 20:25:22.571796       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0830 20:25:22.571229       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0830 20:25:22.571950       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0830 20:25:22.571266       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0830 20:25:22.572170       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0830 20:25:22.574705       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0830 20:25:22.574742       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0830 20:25:23.432411       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0830 20:25:23.432482       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0830 20:25:23.504487       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0830 20:25:23.504740       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0830 20:25:23.523851       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0830 20:25:23.524157       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0830 20:25:23.612877       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0830 20:25:23.612920       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0830 20:25:23.677056       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0830 20:25:23.677080       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0830 20:25:23.677125       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0830 20:25:23.677164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0830 20:25:23.769321       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0830 20:25:23.769404       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0830 20:25:24.055162       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0830 20:25:24.055427       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0830 20:25:26.345499       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-30 20:24:49 UTC, ends at Wed 2023-08-30 20:28:26 UTC. --
	Aug 30 20:25:51 multinode-944570 kubelet[2360]: I0830 20:25:51.031865    2360 topology_manager.go:215] "Topology Admit Handler" podUID="4e79c194-f047-45a2-9ed4-ffafbe983cda" podNamespace="kube-system" podName="storage-provisioner"
	Aug 30 20:25:51 multinode-944570 kubelet[2360]: I0830 20:25:51.050071    2360 topology_manager.go:215] "Topology Admit Handler" podUID="19a6c9fa-86e0-4e7f-a62b-28ee984bdd45" podNamespace="kube-system" podName="coredns-5dd5756b68-lzj6n"
	Aug 30 20:25:51 multinode-944570 kubelet[2360]: I0830 20:25:51.170243    2360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19a6c9fa-86e0-4e7f-a62b-28ee984bdd45-config-volume\") pod \"coredns-5dd5756b68-lzj6n\" (UID: \"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45\") " pod="kube-system/coredns-5dd5756b68-lzj6n"
	Aug 30 20:25:51 multinode-944570 kubelet[2360]: I0830 20:25:51.170321    2360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4e79c194-f047-45a2-9ed4-ffafbe983cda-tmp\") pod \"storage-provisioner\" (UID: \"4e79c194-f047-45a2-9ed4-ffafbe983cda\") " pod="kube-system/storage-provisioner"
	Aug 30 20:25:51 multinode-944570 kubelet[2360]: I0830 20:25:51.170346    2360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh2f6\" (UniqueName: \"kubernetes.io/projected/4e79c194-f047-45a2-9ed4-ffafbe983cda-kube-api-access-sh2f6\") pod \"storage-provisioner\" (UID: \"4e79c194-f047-45a2-9ed4-ffafbe983cda\") " pod="kube-system/storage-provisioner"
	Aug 30 20:25:51 multinode-944570 kubelet[2360]: I0830 20:25:51.170377    2360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbqp4\" (UniqueName: \"kubernetes.io/projected/19a6c9fa-86e0-4e7f-a62b-28ee984bdd45-kube-api-access-hbqp4\") pod \"coredns-5dd5756b68-lzj6n\" (UID: \"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45\") " pod="kube-system/coredns-5dd5756b68-lzj6n"
	Aug 30 20:25:51 multinode-944570 kubelet[2360]: I0830 20:25:51.940875    2360 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1710b141f702688f2ac6c1123dd35b15c5c3dcf83e6a5b1ea4bbe967a5b28b11"
	Aug 30 20:25:52 multinode-944570 kubelet[2360]: I0830 20:25:52.029409    2360 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="206abe062563a58c0cfef43fd491b8a3cae33b87e0cc0fced346e41ef4ec84e9"
	Aug 30 20:25:53 multinode-944570 kubelet[2360]: I0830 20:25:53.056685    2360 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.056573184 podCreationTimestamp="2023-08-30 20:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-30 20:25:53.056381 +0000 UTC m=+27.388191130" watchObservedRunningTime="2023-08-30 20:25:53.056573184 +0000 UTC m=+27.388383315"
	Aug 30 20:25:53 multinode-944570 kubelet[2360]: I0830 20:25:53.074074    2360 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-lzj6n" podStartSLOduration=15.074038115 podCreationTimestamp="2023-08-30 20:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-30 20:25:53.073980952 +0000 UTC m=+27.405791082" watchObservedRunningTime="2023-08-30 20:25:53.074038115 +0000 UTC m=+27.405848245"
	Aug 30 20:26:26 multinode-944570 kubelet[2360]: E0830 20:26:26.047107    2360 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 30 20:26:26 multinode-944570 kubelet[2360]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 30 20:26:26 multinode-944570 kubelet[2360]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 30 20:26:26 multinode-944570 kubelet[2360]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 30 20:27:03 multinode-944570 kubelet[2360]: I0830 20:27:03.257824    2360 topology_manager.go:215] "Topology Admit Handler" podUID="d0a3ab29-c39e-48e3-8a1b-b64572e1729f" podNamespace="default" podName="busybox-5bc68d56bd-fhrtd"
	Aug 30 20:27:03 multinode-944570 kubelet[2360]: I0830 20:27:03.375645    2360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j47wk\" (UniqueName: \"kubernetes.io/projected/d0a3ab29-c39e-48e3-8a1b-b64572e1729f-kube-api-access-j47wk\") pod \"busybox-5bc68d56bd-fhrtd\" (UID: \"d0a3ab29-c39e-48e3-8a1b-b64572e1729f\") " pod="default/busybox-5bc68d56bd-fhrtd"
	Aug 30 20:27:06 multinode-944570 kubelet[2360]: I0830 20:27:06.608752    2360 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-fhrtd" podStartSLOduration=1.706541547 podCreationTimestamp="2023-08-30 20:27:03 +0000 UTC" firstStartedPulling="2023-08-30 20:27:04.151331512 +0000 UTC m=+98.483141626" lastFinishedPulling="2023-08-30 20:27:06.052436217 +0000 UTC m=+100.384246343" observedRunningTime="2023-08-30 20:27:06.607311977 +0000 UTC m=+100.939122107" watchObservedRunningTime="2023-08-30 20:27:06.607646264 +0000 UTC m=+100.939456397"
	Aug 30 20:27:26 multinode-944570 kubelet[2360]: E0830 20:27:26.045773    2360 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 30 20:27:26 multinode-944570 kubelet[2360]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 30 20:27:26 multinode-944570 kubelet[2360]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 30 20:27:26 multinode-944570 kubelet[2360]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 30 20:28:26 multinode-944570 kubelet[2360]: E0830 20:28:26.048373    2360 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 30 20:28:26 multinode-944570 kubelet[2360]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 30 20:28:26 multinode-944570 kubelet[2360]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 30 20:28:26 multinode-944570 kubelet[2360]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
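For reference, the kindnet entries in the log above program one host route per remote node's pod CIDR. A minimal shell sketch of the equivalent manual route, with the addresses taken from the log (an illustration only, not kindnet's actual code path):

	# Route pod CIDR 10.244.2.0/24 (multinode-944570-m03) via that node's IP.
	ip route replace 10.244.2.0/24 via 192.168.39.83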
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-944570 -n multinode-944570
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-944570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (20.66s)
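The failing step can be retried outside the test harness with the same minikube subcommands the test drives; a minimal reproduction sketch, assuming the profile and node names from this run:

	# Stop the third node, then restart it; in this run the restart exited with status 90.
	out/minikube-linux-amd64 -p multinode-944570 node stop m03
	out/minikube-linux-amd64 -p multinode-944570 node start m03 --alsologtostderr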

                                                
                                    

Test pass (285/317)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 22.37
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.28.1/json-events 13.58
11 TestDownloadOnly/v1.28.1/preload-exists 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.13
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.12
19 TestBinaryMirror 0.57
20 TestOffline 93.56
22 TestAddons/Setup 220.12
24 TestAddons/parallel/Registry 17.01
25 TestAddons/parallel/Ingress 21.86
26 TestAddons/parallel/InspektorGadget 10.97
27 TestAddons/parallel/MetricsServer 6.15
28 TestAddons/parallel/HelmTiller 12.52
30 TestAddons/parallel/CSI 65.36
31 TestAddons/parallel/Headlamp 16.44
32 TestAddons/parallel/CloudSpanner 5.49
35 TestAddons/serial/GCPAuth/Namespaces 0.12
36 TestAddons/StoppedEnableDisable 13.35
37 TestCertOptions 66.21
38 TestCertExpiration 351.57
39 TestDockerFlags 98.56
40 TestForceSystemdFlag 50.6
41 TestForceSystemdEnv 114.35
43 TestKVMDriverInstallOrUpdate 3.76
47 TestErrorSpam/setup 49.26
48 TestErrorSpam/start 0.33
49 TestErrorSpam/status 0.72
50 TestErrorSpam/pause 1.16
51 TestErrorSpam/unpause 1.24
52 TestErrorSpam/stop 12.5
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 65.84
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 40.99
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.07
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.56
64 TestFunctional/serial/CacheCmd/cache/add_local 1.69
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.51
69 TestFunctional/serial/CacheCmd/cache/delete 0.1
70 TestFunctional/serial/MinikubeKubectlCmd 0.1
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
72 TestFunctional/serial/ExtraConfig 42.64
73 TestFunctional/serial/ComponentHealth 0.07
74 TestFunctional/serial/LogsCmd 1.05
75 TestFunctional/serial/LogsFileCmd 1.01
76 TestFunctional/serial/InvalidService 4.39
78 TestFunctional/parallel/ConfigCmd 0.29
79 TestFunctional/parallel/DashboardCmd 17.9
80 TestFunctional/parallel/DryRun 0.26
81 TestFunctional/parallel/InternationalLanguage 0.14
82 TestFunctional/parallel/StatusCmd 0.79
86 TestFunctional/parallel/ServiceCmdConnect 23.51
87 TestFunctional/parallel/AddonsCmd 0.15
88 TestFunctional/parallel/PersistentVolumeClaim 58.11
90 TestFunctional/parallel/SSHCmd 0.5
91 TestFunctional/parallel/CpCmd 0.96
92 TestFunctional/parallel/MySQL 37.32
93 TestFunctional/parallel/FileSync 0.21
94 TestFunctional/parallel/CertSync 1.32
98 TestFunctional/parallel/NodeLabels 0.08
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.22
102 TestFunctional/parallel/License 1.27
103 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
104 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
105 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
106 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
107 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
108 TestFunctional/parallel/ImageCommands/ImageBuild 4.08
109 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
110 TestFunctional/parallel/DockerEnv/bash 0.83
111 TestFunctional/parallel/ImageCommands/Setup 2.38
112 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
113 TestFunctional/parallel/Version/short 0.05
114 TestFunctional/parallel/Version/components 0.55
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.34
125 TestFunctional/parallel/MountCmd/any-port 27.57
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.56
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.54
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.61
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.63
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.91
132 TestFunctional/parallel/MountCmd/specific-port 1.75
133 TestFunctional/parallel/MountCmd/VerifyCleanup 0.89
134 TestFunctional/parallel/ServiceCmd/DeployApp 13.2
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
136 TestFunctional/parallel/ProfileCmd/profile_list 0.31
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.25
138 TestFunctional/parallel/ServiceCmd/List 1.26
139 TestFunctional/parallel/ServiceCmd/JSONOutput 1.28
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.31
141 TestFunctional/parallel/ServiceCmd/Format 0.32
142 TestFunctional/parallel/ServiceCmd/URL 0.32
143 TestFunctional/delete_addon-resizer_images 0.07
144 TestFunctional/delete_my-image_image 0.01
145 TestFunctional/delete_minikube_cached_images 0.01
146 TestGvisorAddon 319.39
149 TestImageBuild/serial/Setup 50.61
150 TestImageBuild/serial/NormalBuild 2.32
151 TestImageBuild/serial/BuildWithBuildArg 1.18
152 TestImageBuild/serial/BuildWithDockerIgnore 0.36
153 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.29
156 TestIngressAddonLegacy/StartLegacyK8sCluster 89.85
158 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.4
159 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.52
160 TestIngressAddonLegacy/serial/ValidateIngressAddons 34.17
163 TestJSONOutput/start/Command 67.67
164 TestJSONOutput/start/Audit 0
166 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/pause/Command 0.57
170 TestJSONOutput/pause/Audit 0
172 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/unpause/Command 0.52
176 TestJSONOutput/unpause/Audit 0
178 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/stop/Command 13.11
182 TestJSONOutput/stop/Audit 0
184 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
186 TestErrorJSONOutput 0.19
191 TestMainNoArgs 0.04
192 TestMinikubeProfile 103.66
195 TestMountStart/serial/StartWithMountFirst 31.8
196 TestMountStart/serial/VerifyMountFirst 0.39
197 TestMountStart/serial/StartWithMountSecond 29.67
198 TestMountStart/serial/VerifyMountSecond 0.38
199 TestMountStart/serial/DeleteFirst 0.89
200 TestMountStart/serial/VerifyMountPostDelete 0.37
201 TestMountStart/serial/Stop 2.42
202 TestMountStart/serial/RestartStopped 26.19
203 TestMountStart/serial/VerifyMountPostStop 0.38
206 TestMultiNode/serial/FreshStart2Nodes 144.74
207 TestMultiNode/serial/DeployApp2Nodes 5.13
208 TestMultiNode/serial/PingHostFrom2Pods 0.87
209 TestMultiNode/serial/AddNode 46.1
210 TestMultiNode/serial/ProfileList 0.21
211 TestMultiNode/serial/CopyFile 7.31
212 TestMultiNode/serial/StopNode 3.97
214 TestMultiNode/serial/RestartKeepsNodes 259.86
215 TestMultiNode/serial/DeleteNode 1.73
216 TestMultiNode/serial/StopMultiNode 25.51
217 TestMultiNode/serial/RestartMultiNode 100.84
218 TestMultiNode/serial/ValidateNameConflict 49.26
223 TestPreload 232.82
225 TestScheduledStopUnix 124.41
226 TestSkaffold 140.19
229 TestRunningBinaryUpgrade 266.73
231 TestKubernetesUpgrade 174.9
244 TestStoppedBinaryUpgrade/Setup 1.79
245 TestStoppedBinaryUpgrade/Upgrade 204.81
247 TestPause/serial/Start 88.58
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
257 TestNoKubernetes/serial/StartWithK8s 59.92
258 TestNetworkPlugins/group/auto/Start 82.41
259 TestPause/serial/SecondStartNoReconfiguration 53.06
260 TestNoKubernetes/serial/StartWithStopK8s 42.6
261 TestPause/serial/Pause 0.66
262 TestPause/serial/VerifyStatus 0.27
263 TestPause/serial/Unpause 0.69
264 TestPause/serial/PauseAgain 1
265 TestPause/serial/DeletePaused 1.33
266 TestPause/serial/VerifyDeletedResources 0.67
267 TestNetworkPlugins/group/kindnet/Start 78.15
268 TestNetworkPlugins/group/auto/KubeletFlags 0.27
269 TestNetworkPlugins/group/auto/NetCatPod 12.43
270 TestNoKubernetes/serial/Start 49.3
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.29
272 TestNetworkPlugins/group/auto/DNS 0.2
273 TestNetworkPlugins/group/auto/Localhost 0.14
274 TestNetworkPlugins/group/auto/HairPin 0.17
275 TestNetworkPlugins/group/calico/Start 130.4
276 TestNetworkPlugins/group/custom-flannel/Start 127.9
277 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
278 TestNoKubernetes/serial/ProfileList 0.92
279 TestNoKubernetes/serial/Stop 2.17
280 TestNoKubernetes/serial/StartNoArgs 79.69
281 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
282 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
283 TestNetworkPlugins/group/kindnet/NetCatPod 13.58
284 TestNetworkPlugins/group/kindnet/DNS 0.19
285 TestNetworkPlugins/group/kindnet/Localhost 0.15
286 TestNetworkPlugins/group/kindnet/HairPin 0.15
287 TestNetworkPlugins/group/false/Start 96.71
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
289 TestNetworkPlugins/group/enable-default-cni/Start 98.65
290 TestNetworkPlugins/group/calico/ControllerPod 5.04
291 TestNetworkPlugins/group/calico/KubeletFlags 0.23
292 TestNetworkPlugins/group/calico/NetCatPod 12.75
293 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
294 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.34
295 TestNetworkPlugins/group/calico/DNS 0.44
296 TestNetworkPlugins/group/calico/Localhost 0.34
297 TestNetworkPlugins/group/calico/HairPin 0.24
298 TestNetworkPlugins/group/custom-flannel/DNS 0.23
299 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
300 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
301 TestNetworkPlugins/group/flannel/Start 89.47
302 TestNetworkPlugins/group/bridge/Start 100.72
303 TestNetworkPlugins/group/false/KubeletFlags 0.21
304 TestNetworkPlugins/group/false/NetCatPod 11.38
305 TestNetworkPlugins/group/false/DNS 0.28
306 TestNetworkPlugins/group/false/Localhost 0.22
307 TestNetworkPlugins/group/false/HairPin 0.25
308 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
309 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.47
310 TestNetworkPlugins/group/kubenet/Start 110.44
311 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
312 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
313 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
315 TestStartStop/group/old-k8s-version/serial/FirstStart 147.69
316 TestNetworkPlugins/group/flannel/ControllerPod 5.03
317 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
318 TestNetworkPlugins/group/flannel/NetCatPod 14.43
319 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
320 TestNetworkPlugins/group/bridge/NetCatPod 12.33
321 TestNetworkPlugins/group/flannel/DNS 0.21
322 TestNetworkPlugins/group/flannel/Localhost 0.18
323 TestNetworkPlugins/group/flannel/HairPin 0.15
324 TestNetworkPlugins/group/bridge/DNS 0.23
325 TestNetworkPlugins/group/bridge/Localhost 0.19
326 TestNetworkPlugins/group/bridge/HairPin 0.16
328 TestStartStop/group/no-preload/serial/FirstStart 93.22
330 TestStartStop/group/embed-certs/serial/FirstStart 97.21
331 TestNetworkPlugins/group/kubenet/KubeletFlags 0.26
332 TestNetworkPlugins/group/kubenet/NetCatPod 14.43
333 TestNetworkPlugins/group/kubenet/DNS 0.18
334 TestNetworkPlugins/group/kubenet/Localhost 0.17
335 TestNetworkPlugins/group/kubenet/HairPin 0.17
337 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 72.89
338 TestStartStop/group/no-preload/serial/DeployApp 10.53
339 TestStartStop/group/old-k8s-version/serial/DeployApp 12.55
340 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.94
341 TestStartStop/group/embed-certs/serial/DeployApp 10.61
342 TestStartStop/group/no-preload/serial/Stop 13.2
343 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.87
344 TestStartStop/group/old-k8s-version/serial/Stop 13.14
345 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.17
346 TestStartStop/group/embed-certs/serial/Stop 13.12
347 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
348 TestStartStop/group/no-preload/serial/SecondStart 308.37
349 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
350 TestStartStop/group/old-k8s-version/serial/SecondStart 72.63
351 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
352 TestStartStop/group/embed-certs/serial/SecondStart 343.37
353 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.46
354 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.07
355 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.14
356 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
357 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 354.81
358 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 25.02
359 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
360 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.42
361 TestStartStop/group/old-k8s-version/serial/Pause 2.94
363 TestStartStop/group/newest-cni/serial/FirstStart 73.16
364 TestStartStop/group/newest-cni/serial/DeployApp 0
365 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.02
366 TestStartStop/group/newest-cni/serial/Stop 8.11
367 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
368 TestStartStop/group/newest-cni/serial/SecondStart 47
369 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
370 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
371 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
372 TestStartStop/group/newest-cni/serial/Pause 2.44
373 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
374 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
375 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
376 TestStartStop/group/no-preload/serial/Pause 2.46
377 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
378 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
379 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
380 TestStartStop/group/embed-certs/serial/Pause 2.41
381 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
382 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
383 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
384 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.39
TestDownloadOnly/v1.16.0/json-events (22.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-331287 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-331287 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (22.372645996s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (22.37s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
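This check only asserts that the preload tarball is already on disk. A hypothetical manual equivalent, with the cache path taken from the download log later in this report:

	ls -lh /home/jenkins/minikube-integration/17145-222139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4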

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-331287
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-331287: exit status 85 (60.829919ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-331287 | jenkins | v1.31.2 | 30 Aug 23 20:05 UTC |          |
	|         | -p download-only-331287        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 20:05:41
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 20:05:41.560956  229359 out.go:296] Setting OutFile to fd 1 ...
	I0830 20:05:41.561103  229359 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 20:05:41.561113  229359 out.go:309] Setting ErrFile to fd 2...
	I0830 20:05:41.561117  229359 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 20:05:41.561303  229359 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-222139/.minikube/bin
	W0830 20:05:41.561419  229359 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17145-222139/.minikube/config/config.json: open /home/jenkins/minikube-integration/17145-222139/.minikube/config/config.json: no such file or directory
	I0830 20:05:41.562002  229359 out.go:303] Setting JSON to true
	I0830 20:05:41.563022  229359 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6484,"bootTime":1693419458,"procs":403,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 20:05:41.563097  229359 start.go:138] virtualization: kvm guest
	I0830 20:05:41.566029  229359 out.go:97] [download-only-331287] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 20:05:41.567780  229359 out.go:169] MINIKUBE_LOCATION=17145
	W0830 20:05:41.566170  229359 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17145-222139/.minikube/cache/preloaded-tarball: no such file or directory
	I0830 20:05:41.566291  229359 notify.go:220] Checking for updates...
	I0830 20:05:41.570971  229359 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 20:05:41.572539  229359 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17145-222139/kubeconfig
	I0830 20:05:41.573998  229359 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-222139/.minikube
	I0830 20:05:41.575571  229359 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0830 20:05:41.578238  229359 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0830 20:05:41.578434  229359 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 20:05:42.132038  229359 out.go:97] Using the kvm2 driver based on user configuration
	I0830 20:05:42.132070  229359 start.go:298] selected driver: kvm2
	I0830 20:05:42.132077  229359 start.go:902] validating driver "kvm2" against <nil>
	I0830 20:05:42.132408  229359 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 20:05:42.132524  229359 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17145-222139/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 20:05:42.146787  229359 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 20:05:42.146843  229359 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0830 20:05:42.147312  229359 start_flags.go:382] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0830 20:05:42.147448  229359 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0830 20:05:42.147485  229359 cni.go:84] Creating CNI manager for ""
	I0830 20:05:42.147498  229359 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0830 20:05:42.147504  229359 start_flags.go:319] config:
	{Name:download-only-331287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-331287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 20:05:42.147712  229359 iso.go:125] acquiring lock: {Name:mk193fbe19fd874a72f32d45bb0f490410c0429c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 20:05:42.149877  229359 out.go:97] Downloading VM boot image ...
	I0830 20:05:42.149919  229359 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17145-222139/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0830 20:05:51.142627  229359 out.go:97] Starting control plane node download-only-331287 in cluster download-only-331287
	I0830 20:05:51.142649  229359 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0830 20:05:51.240813  229359 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0830 20:05:51.240846  229359 cache.go:57] Caching tarball of preloaded images
	I0830 20:05:51.241042  229359 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0830 20:05:51.243146  229359 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0830 20:05:51.243168  229359 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0830 20:05:51.349229  229359 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/17145-222139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-331287"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)
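The non-zero exit is the expected outcome here: the download-only profile never created a control-plane node (see "does not exist" in the log above). A minimal sketch of the same check by hand:

	out/minikube-linux-amd64 logs -p download-only-331287
	echo $?   # 85 in this run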

                                                
                                    
TestDownloadOnly/v1.28.1/json-events (13.58s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-331287 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-331287 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=docker --driver=kvm2 : (13.577009722s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (13.58s)

                                                
                                    
TestDownloadOnly/v1.28.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-331287
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-331287: exit status 85 (58.695034ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-331287 | jenkins | v1.31.2 | 30 Aug 23 20:05 UTC |          |
	|         | -p download-only-331287        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-331287 | jenkins | v1.31.2 | 30 Aug 23 20:06 UTC |          |
	|         | -p download-only-331287        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 20:06:03
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 20:06:03.996846  229466 out.go:296] Setting OutFile to fd 1 ...
	I0830 20:06:03.996939  229466 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 20:06:03.996946  229466 out.go:309] Setting ErrFile to fd 2...
	I0830 20:06:03.996950  229466 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 20:06:03.997130  229466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-222139/.minikube/bin
	W0830 20:06:03.997244  229466 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17145-222139/.minikube/config/config.json: open /home/jenkins/minikube-integration/17145-222139/.minikube/config/config.json: no such file or directory
	I0830 20:06:03.997672  229466 out.go:303] Setting JSON to true
	I0830 20:06:03.998664  229466 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6506,"bootTime":1693419458,"procs":398,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 20:06:03.998725  229466 start.go:138] virtualization: kvm guest
	I0830 20:06:04.000849  229466 out.go:97] [download-only-331287] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 20:06:04.002479  229466 out.go:169] MINIKUBE_LOCATION=17145
	I0830 20:06:04.001052  229466 notify.go:220] Checking for updates...
	I0830 20:06:04.005576  229466 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 20:06:04.007090  229466 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17145-222139/kubeconfig
	I0830 20:06:04.009362  229466 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-222139/.minikube
	I0830 20:06:04.010863  229466 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0830 20:06:04.013401  229466 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0830 20:06:04.013796  229466 config.go:182] Loaded profile config "download-only-331287": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0830 20:06:04.013841  229466 start.go:810] api.Load failed for download-only-331287: filestore "download-only-331287": Docker machine "download-only-331287" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0830 20:06:04.013919  229466 driver.go:373] Setting default libvirt URI to qemu:///system
	W0830 20:06:04.013945  229466 start.go:810] api.Load failed for download-only-331287: filestore "download-only-331287": Docker machine "download-only-331287" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0830 20:06:04.049176  229466 out.go:97] Using the kvm2 driver based on existing profile
	I0830 20:06:04.049216  229466 start.go:298] selected driver: kvm2
	I0830 20:06:04.049223  229466 start.go:902] validating driver "kvm2" against &{Name:download-only-331287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-331287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 20:06:04.049611  229466 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 20:06:04.049677  229466 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17145-222139/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 20:06:04.066008  229466 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 20:06:04.066655  229466 cni.go:84] Creating CNI manager for ""
	I0830 20:06:04.066676  229466 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0830 20:06:04.066687  229466 start_flags.go:319] config:
	{Name:download-only-331287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:download-only-331287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 20:06:04.066822  229466 iso.go:125] acquiring lock: {Name:mk193fbe19fd874a72f32d45bb0f490410c0429c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 20:06:04.068511  229466 out.go:97] Starting control plane node download-only-331287 in cluster download-only-331287
	I0830 20:06:04.068531  229466 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0830 20:06:04.476831  229466 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4
	I0830 20:06:04.476869  229466 cache.go:57] Caching tarball of preloaded images
	I0830 20:06:04.477036  229466 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0830 20:06:04.479048  229466 out.go:97] Downloading Kubernetes v1.28.1 preload ...
	I0830 20:06:04.479069  229466 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 ...
	I0830 20:06:04.583434  229466 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4?checksum=md5:e86539672b8ce9a3040455131c2fbb87 -> /home/jenkins/minikube-integration/17145-222139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-331287"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.06s)
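
Exit status 85 is the expected outcome here: a --download-only profile caches artifacts without ever creating a VM, so "minikube logs" has no control plane node to read. A minimal sketch of the same behavior (the profile name "download-demo" is illustrative, not from this run):

	out/minikube-linux-amd64 start -p download-demo --download-only --driver=kvm2
	out/minikube-linux-amd64 logs -p download-demo   # fails: the control plane node does not exist
	echo $?                                          # prints 85, the exit code asserted above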

TestDownloadOnly/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.13s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-331287
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-500567 --alsologtostderr --binary-mirror http://127.0.0.1:44823 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-500567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-500567
--- PASS: TestBinaryMirror (0.57s)
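
The --binary-mirror flag points minikube's Kubernetes binary downloads at an alternate endpoint; the test serves one from a local HTTP server on the port shown above. A hedged sketch of the invocation shape (profile name illustrative; the mirror must already be listening):

	out/minikube-linux-amd64 start --download-only -p mirror-demo --binary-mirror http://127.0.0.1:44823 --driver=kvm2
	out/minikube-linux-amd64 delete -p mirror-demo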

TestOffline (93.56s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-319513 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-319513 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m32.519137516s)
helpers_test.go:175: Cleaning up "offline-docker-319513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-319513
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-319513: (1.043401934s)
--- PASS: TestOffline (93.56s)

TestAddons/Setup (220.12s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-120922 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-120922 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m40.119080009s)
--- PASS: TestAddons/Setup (220.12s)
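
The same addons can also be toggled after start-up; a short sketch against the profile above, using "addons enable" per addon instead of repeated --addons flags:

	out/minikube-linux-amd64 -p addons-120922 addons enable registry
	out/minikube-linux-amd64 -p addons-120922 addons enable metrics-server
	out/minikube-linux-amd64 -p addons-120922 addons list   # shows enabled/disabled state per addon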

TestAddons/parallel/Registry (17.01s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 21.787831ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-2b5hr" [03856d95-dae3-486c-971f-de7a0522e017] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.017918746s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tbw9d" [a83b2df3-8c28-4217-b60d-c1f0581869cd] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.018917031s
addons_test.go:316: (dbg) Run:  kubectl --context addons-120922 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-120922 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-120922 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.074965314s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-120922 ip
2023/08/30 20:10:14 [DEBUG] GET http://192.168.39.117:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-120922 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.01s)
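
The DEBUG line above is the host-side half of the check: the test resolves the node IP with "minikube ip" and issues a GET against port 5000 on the node. That step in isolation (a hedged sketch; the -w flag just prints the HTTP status):

	IP=$(out/minikube-linux-amd64 -p addons-120922 ip)
	curl -sS -o /dev/null -w '%{http_code}\n' "http://$IP:5000"   # 200 indicates the registry answered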

TestAddons/parallel/Ingress (21.86s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-120922 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-120922 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-120922 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [88e9b18e-dcc5-44ff-b576-b36e63bfe9ef] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [88e9b18e-dcc5-44ff-b576-b36e63bfe9ef] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.013323737s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-120922 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-120922 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-120922 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.117
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-120922 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-120922 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-120922 addons disable ingress --alsologtostderr -v=1: (7.792450712s)
--- PASS: TestAddons/parallel/Ingress (21.86s)
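
Two mechanisms are covered here: the ingress check curls nginx through the controller from inside the VM, overriding Host to match the Ingress rule, and the ingress-dns check resolves a test hostname against the node IP acting as a DNS server. Both steps in isolation:

	out/minikube-linux-amd64 -p addons-120922 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test "$(out/minikube-linux-amd64 -p addons-120922 ip)"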

TestAddons/parallel/InspektorGadget (10.97s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-68pv6" [5a0a57c1-a91b-41fc-a24d-e7e38e6668d7] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.036109356s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-120922
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-120922: (5.926747655s)
--- PASS: TestAddons/parallel/InspektorGadget (10.97s)

TestAddons/parallel/MetricsServer (6.15s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 21.694493ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-b8lmf" [0b26c50d-b67c-4fcb-9ff2-a2a8e2fc9bc9] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.021139256s
addons_test.go:391: (dbg) Run:  kubectl --context addons-120922 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-120922 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p addons-120922 addons disable metrics-server --alsologtostderr -v=1: (1.025444313s)
--- PASS: TestAddons/parallel/MetricsServer (6.15s)
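
Once the k8s-app=metrics-server pod reports healthy, resource metrics are queryable through the metrics API, which is what the kubectl invocation above exercises:

	kubectl --context addons-120922 top pods -n kube-system
	kubectl --context addons-120922 top nodes   # node-level metrics come from the same API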

TestAddons/parallel/HelmTiller (12.52s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 4.318017ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-485tg" [40adbde1-a662-476e-b544-c195b2a533a5] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.017235283s
addons_test.go:449: (dbg) Run:  kubectl --context addons-120922 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-120922 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.927091559s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-120922 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.52s)

TestAddons/parallel/CSI (65.36s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 5.789405ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-120922 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-120922 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6e3eacea-8cf1-4fb7-b076-d4e0eaa0ed6f] Pending
helpers_test.go:344: "task-pv-pod" [6e3eacea-8cf1-4fb7-b076-d4e0eaa0ed6f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6e3eacea-8cf1-4fb7-b076-d4e0eaa0ed6f] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.018502602s
addons_test.go:560: (dbg) Run:  kubectl --context addons-120922 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-120922 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-120922 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-120922 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-120922 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-120922 delete pod task-pv-pod: (1.314975647s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-120922 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-120922 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-120922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-120922 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5667f7f9-82ef-4ea6-b63c-de016439448e] Pending
helpers_test.go:344: "task-pv-pod-restore" [5667f7f9-82ef-4ea6-b63c-de016439448e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5667f7f9-82ef-4ea6-b63c-de016439448e] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.025920863s
addons_test.go:602: (dbg) Run:  kubectl --context addons-120922 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-120922 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-120922 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-120922 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-120922 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.602496276s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-120922 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (65.36s)
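
Condensed, the flow above is: provision a PVC against the csi-hostpath driver, mount it in a pod, snapshot it, then restore the snapshot into a new claim and pod. The manifests are the testdata files named in the run lines:

	kubectl --context addons-120922 create -f testdata/csi-hostpath-driver/pvc.yaml             # dynamic provisioning
	kubectl --context addons-120922 create -f testdata/csi-hostpath-driver/pv-pod.yaml          # consume the claim
	kubectl --context addons-120922 create -f testdata/csi-hostpath-driver/snapshot.yaml        # VolumeSnapshot of the PVC
	kubectl --context addons-120922 create -f testdata/csi-hostpath-driver/pvc-restore.yaml     # new claim from the snapshot
	kubectl --context addons-120922 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml  # pod over the restored claim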

TestAddons/parallel/Headlamp (16.44s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-120922 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-120922 --alsologtostderr -v=1: (1.405343843s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-699c48fb74-dxdmf" [ac716894-dd68-4b63-8fd2-951232a451b7] Pending
helpers_test.go:344: "headlamp-699c48fb74-dxdmf" [ac716894-dd68-4b63-8fd2-951232a451b7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-dxdmf" [ac716894-dd68-4b63-8fd2-951232a451b7] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.035843164s
--- PASS: TestAddons/parallel/Headlamp (16.44s)

TestAddons/parallel/CloudSpanner (5.49s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dcc56475c-rp7n7" [ac5084f1-d9cb-458e-a35c-685532ff6bcc] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009275484s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-120922
--- PASS: TestAddons/parallel/CloudSpanner (5.49s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-120922 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-120922 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (13.35s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-120922
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-120922: (13.093071722s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-120922
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-120922
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-120922
--- PASS: TestAddons/StoppedEnableDisable (13.35s)

TestCertOptions (66.21s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-979391 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
E0830 20:49:58.659067  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-979391 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m3.657258555s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-979391 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-979391 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-979391 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-979391" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-979391
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-979391: (1.676292133s)
--- PASS: TestCertOptions (66.21s)
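
The openssl step is what validates the flags: each --apiserver-ips and --apiserver-names value should appear as a subject alternative name in the generated apiserver certificate. Filtering the same output down to the relevant fields:

	out/minikube-linux-amd64 -p cert-options-979391 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -E 'DNS:|IP Address:'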

TestCertExpiration (351.57s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-410608 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-410608 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m42.254223881s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-410608 --memory=2048 --cert-expiration=8760h --driver=kvm2 
E0830 20:48:54.037151  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/skaffold-318104/client.crt: no such file or directory
E0830 20:48:54.042469  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/skaffold-318104/client.crt: no such file or directory
E0830 20:48:54.052732  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/skaffold-318104/client.crt: no such file or directory
E0830 20:48:54.073019  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/skaffold-318104/client.crt: no such file or directory
E0830 20:48:54.113372  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/skaffold-318104/client.crt: no such file or directory
E0830 20:48:54.193728  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/skaffold-318104/client.crt: no such file or directory
E0830 20:48:54.354573  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/skaffold-318104/client.crt: no such file or directory
E0830 20:48:54.675229  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/skaffold-318104/client.crt: no such file or directory
E0830 20:48:55.315945  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/skaffold-318104/client.crt: no such file or directory
E0830 20:48:56.596194  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/skaffold-318104/client.crt: no such file or directory
E0830 20:48:59.156574  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/skaffold-318104/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-410608 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (1m8.226971668s)
helpers_test.go:175: Cleaning up "cert-expiration-410608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-410608
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-410608: (1.083161912s)
--- PASS: TestCertExpiration (351.57s)
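
The two starts bracket a certificate lifetime: the first issues 3-minute certs, the test waits out the expiry, and the restart with --cert-expiration=8760h regenerates them. Validity dates can be inspected in-VM (a sketch; same cert path as the TestCertOptions check above):

	out/minikube-linux-amd64 -p cert-expiration-410608 ssh "openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"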

TestDockerFlags (98.56s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-146519 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
E0830 20:45:43.392972  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-146519 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m37.072757659s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-146519 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-146519 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-146519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-146519
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-146519: (1.054946834s)
--- PASS: TestDockerFlags (98.56s)
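
The two systemctl probes are the assertions: --docker-env values should surface in the docker unit's Environment property, and --docker-opt values in its ExecStart line:

	out/minikube-linux-amd64 -p docker-flags-146519 ssh "sudo systemctl show docker --property=Environment --no-pager"   # expect FOO=BAR and BAZ=BAT
	out/minikube-linux-amd64 -p docker-flags-146519 ssh "sudo systemctl show docker --property=ExecStart --no-pager"     # expect the debug and icc=true options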

TestForceSystemdFlag (50.6s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-383510 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-383510 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (49.359665528s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-383510 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-383510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-383510
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-383510: (1.019233579s)
--- PASS: TestForceSystemdFlag (50.60s)

TestForceSystemdEnv (114.35s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-949230 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-949230 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m52.540818815s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-949230 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-949230" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-949230
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-949230: (1.421968633s)
--- PASS: TestForceSystemdEnv (114.35s)
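
Unlike the flag variant above, this test drives the same check through the environment rather than --force-systemd. A hedged sketch (MINIKUBE_FORCE_SYSTEMD is the env var the Env variant is understood to use; the profile name is illustrative):

	MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start -p systemd-demo --memory=2048 --driver=kvm2
	out/minikube-linux-amd64 -p systemd-demo ssh "docker info --format {{.CgroupDriver}}"   # expect: systemd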

TestKVMDriverInstallOrUpdate (3.76s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
E0830 20:44:58.659074  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
--- PASS: TestKVMDriverInstallOrUpdate (3.76s)

TestErrorSpam/setup (49.26s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-874076 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-874076 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-874076 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-874076 --driver=kvm2 : (49.258343546s)
--- PASS: TestErrorSpam/setup (49.26s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874076 --log_dir /tmp/nospam-874076 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874076 --log_dir /tmp/nospam-874076 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874076 --log_dir /tmp/nospam-874076 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.72s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874076 --log_dir /tmp/nospam-874076 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874076 --log_dir /tmp/nospam-874076 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874076 --log_dir /tmp/nospam-874076 status
--- PASS: TestErrorSpam/status (0.72s)

TestErrorSpam/pause (1.16s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874076 --log_dir /tmp/nospam-874076 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874076 --log_dir /tmp/nospam-874076 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874076 --log_dir /tmp/nospam-874076 pause
--- PASS: TestErrorSpam/pause (1.16s)

TestErrorSpam/unpause (1.24s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874076 --log_dir /tmp/nospam-874076 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874076 --log_dir /tmp/nospam-874076 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874076 --log_dir /tmp/nospam-874076 unpause
--- PASS: TestErrorSpam/unpause (1.24s)

TestErrorSpam/stop (12.5s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874076 --log_dir /tmp/nospam-874076 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-874076 --log_dir /tmp/nospam-874076 stop: (12.363202143s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874076 --log_dir /tmp/nospam-874076 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874076 --log_dir /tmp/nospam-874076 stop
--- PASS: TestErrorSpam/stop (12.50s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/test/nested/copy/229347/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (65.84s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-037297 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-037297 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m5.843923742s)
--- PASS: TestFunctional/serial/StartWithProxy (65.84s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.99s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-037297 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-037297 --alsologtostderr -v=8: (40.990478144s)
functional_test.go:659: soft start took 40.991219248s for "functional-037297" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.99s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-037297 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-037297 cache add registry.k8s.io/pause:3.1: (1.199357878s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-037297 cache add registry.k8s.io/pause:3.3: (1.187046876s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-037297 cache add registry.k8s.io/pause:latest: (1.171680927s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)
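
"cache add" pulls an image on the host, stores it in the profile's cache, and loads it into the node's container runtime, which is why each pause tag above costs roughly a second. The companion subcommands exercised later in this series:

	out/minikube-linux-amd64 cache list                              # enumerate cached images
	out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3  # drop a single entry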

TestFunctional/serial/CacheCmd/cache/add_local (1.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-037297 /tmp/TestFunctionalserialCacheCmdcacheadd_local881731964/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 cache add minikube-local-cache-test:functional-037297
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-037297 cache add minikube-local-cache-test:functional-037297: (1.375851496s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 cache delete minikube-local-cache-test:functional-037297
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-037297
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.69s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-037297 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (217.861311ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.51s)
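
The FATA output above is the expected intermediate state: the image is removed inside the VM, "cache reload" pushes every cached image back into the node, and the second inspecti then succeeds. The three steps in order:

	out/minikube-linux-amd64 -p functional-037297 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-037297 cache reload
	out/minikube-linux-amd64 -p functional-037297 ssh sudo crictl inspecti registry.k8s.io/pause:latest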

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 kubectl -- --context functional-037297 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-037297 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (42.64s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-037297 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0830 20:14:58.662926  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
E0830 20:14:58.668862  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
E0830 20:14:58.679117  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
E0830 20:14:58.699378  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
E0830 20:14:58.739713  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
E0830 20:14:58.820048  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
E0830 20:14:58.980638  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
E0830 20:14:59.301277  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
E0830 20:14:59.942337  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
E0830 20:15:01.222837  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
E0830 20:15:03.784606  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
E0830 20:15:08.905495  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
E0830 20:15:19.146729  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-037297 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.639269052s)
functional_test.go:757: restart took 42.639401147s for "functional-037297" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.64s)
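
--extra-config takes the form component.flag=value and is forwarded to the named control-plane component; here it enables an apiserver admission plugin across a restart of the existing profile. The invocation, restated:

	out/minikube-linux-amd64 start -p functional-037297 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all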

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-037297 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
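
For reference, the phase/status check logged above can be reproduced by hand. A minimal Go sketch, assuming kubectl is on PATH and the functional-037297 context from this run exists; the structs mirror only the fields of kubectl's pod-list JSON that the check needs, and this is not the harness's actual code:

// componenthealth.go — re-run the control-plane health check by hand.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// podList mirrors just the fields of `kubectl get po -o json` the check uses.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-037297",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		status := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				status = "Ready"
			}
		}
		// The "component" label names the control-plane piece (etcd, kube-apiserver, ...).
		fmt.Printf("%s phase: %s, status: %s\n", p.Metadata.Labels["component"], p.Status.Phase, status)
	}
}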

TestFunctional/serial/LogsFileCmd (1.01s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-037297 logs: (1.053965152s)
--- PASS: TestFunctional/serial/LogsCmd (1.05s)

TestFunctional/serial/InvalidService (4.39s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 logs --file /tmp/TestFunctionalserialLogsFileCmd2245892975/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-037297 logs --file /tmp/TestFunctionalserialLogsFileCmd2245892975/001/logs.txt: (1.013210511s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.01s)

TestFunctional/parallel/ConfigCmd (0.29s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-037297 apply -f testdata/invalidsvc.yaml
E0830 20:15:39.627396  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-037297
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-037297: exit status 115 (280.841254ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.169:32587 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-037297 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.39s)

TestFunctional/parallel/DashboardCmd (17.9s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-037297 config get cpus: exit status 14 (45.858644ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-037297 config get cpus: exit status 14 (43.810581ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)
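
The exit-status-14 branches above are how `config get` reports an unset key. A sketch of the same probe in Go; the binary path and profile name are taken from this run, and the meaning of exit code 14 is as observed in the log, not from minikube documentation:

// configprobe.go — distinguish "unset key" (exit 14 above) from a set value.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-037297",
		"config", "get", "cpus")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("cpus = %s\n", strings.TrimSpace(string(out)))
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 14:
		// Matches the stderr above: "Error: specified key could not be found in config".
		fmt.Println("cpus is not set")
	default:
		log.Fatal(err)
	}
}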

TestFunctional/parallel/DryRun (0.26s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-037297 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-037297 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 236338: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.90s)

TestFunctional/parallel/InternationalLanguage (0.14s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-037297 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-037297 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (129.779258ms)

-- stdout --
	* [functional-037297] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17145
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17145-222139/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-222139/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0830 20:16:22.325977  236246 out.go:296] Setting OutFile to fd 1 ...
	I0830 20:16:22.326080  236246 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 20:16:22.326088  236246 out.go:309] Setting ErrFile to fd 2...
	I0830 20:16:22.326092  236246 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 20:16:22.326269  236246 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-222139/.minikube/bin
	I0830 20:16:22.326775  236246 out.go:303] Setting JSON to false
	I0830 20:16:22.327735  236246 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7124,"bootTime":1693419458,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 20:16:22.327796  236246 start.go:138] virtualization: kvm guest
	I0830 20:16:22.329793  236246 out.go:177] * [functional-037297] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 20:16:22.331589  236246 out.go:177]   - MINIKUBE_LOCATION=17145
	I0830 20:16:22.332827  236246 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 20:16:22.331631  236246 notify.go:220] Checking for updates...
	I0830 20:16:22.334408  236246 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17145-222139/kubeconfig
	I0830 20:16:22.335849  236246 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-222139/.minikube
	I0830 20:16:22.337263  236246 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 20:16:22.339856  236246 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 20:16:22.341383  236246 config.go:182] Loaded profile config "functional-037297": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0830 20:16:22.341716  236246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:16:22.341754  236246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:16:22.357688  236246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38107
	I0830 20:16:22.358148  236246 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:16:22.358798  236246 main.go:141] libmachine: Using API Version  1
	I0830 20:16:22.358836  236246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:16:22.359169  236246 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:16:22.359374  236246 main.go:141] libmachine: (functional-037297) Calling .DriverName
	I0830 20:16:22.359601  236246 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 20:16:22.359913  236246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:16:22.359962  236246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:16:22.374054  236246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42981
	I0830 20:16:22.374420  236246 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:16:22.374915  236246 main.go:141] libmachine: Using API Version  1
	I0830 20:16:22.374938  236246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:16:22.375359  236246 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:16:22.375566  236246 main.go:141] libmachine: (functional-037297) Calling .DriverName
	I0830 20:16:22.407081  236246 out.go:177] * Using the kvm2 driver based on existing profile
	I0830 20:16:22.408336  236246 start.go:298] selected driver: kvm2
	I0830 20:16:22.408354  236246 start.go:902] validating driver "kvm2" against &{Name:functional-037297 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-037297 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.169 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 20:16:22.408508  236246 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 20:16:22.410741  236246 out.go:177] 
	W0830 20:16:22.412237  236246 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0830 20:16:22.413588  236246 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-037297 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.26s)

TestFunctional/parallel/StatusCmd (0.79s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-037297 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-037297 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (135.216726ms)

-- stdout --
	* [functional-037297] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17145
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17145-222139/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-222139/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0830 20:16:21.407669  236129 out.go:296] Setting OutFile to fd 1 ...
	I0830 20:16:21.407835  236129 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 20:16:21.407845  236129 out.go:309] Setting ErrFile to fd 2...
	I0830 20:16:21.407852  236129 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 20:16:21.408164  236129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-222139/.minikube/bin
	I0830 20:16:21.408689  236129 out.go:303] Setting JSON to false
	I0830 20:16:21.409598  236129 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7124,"bootTime":1693419458,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 20:16:21.409676  236129 start.go:138] virtualization: kvm guest
	I0830 20:16:21.411891  236129 out.go:177] * [functional-037297] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I0830 20:16:21.413996  236129 out.go:177]   - MINIKUBE_LOCATION=17145
	I0830 20:16:21.415476  236129 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 20:16:21.414033  236129 notify.go:220] Checking for updates...
	I0830 20:16:21.418296  236129 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17145-222139/kubeconfig
	I0830 20:16:21.419698  236129 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-222139/.minikube
	I0830 20:16:21.421114  236129 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 20:16:21.422521  236129 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 20:16:21.424050  236129 config.go:182] Loaded profile config "functional-037297": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0830 20:16:21.424420  236129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:16:21.424490  236129 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:16:21.439738  236129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42929
	I0830 20:16:21.440173  236129 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:16:21.440833  236129 main.go:141] libmachine: Using API Version  1
	I0830 20:16:21.440872  236129 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:16:21.441299  236129 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:16:21.441518  236129 main.go:141] libmachine: (functional-037297) Calling .DriverName
	I0830 20:16:21.441789  236129 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 20:16:21.442054  236129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:16:21.442090  236129 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:16:21.456238  236129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34173
	I0830 20:16:21.456623  236129 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:16:21.457064  236129 main.go:141] libmachine: Using API Version  1
	I0830 20:16:21.457092  236129 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:16:21.457383  236129 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:16:21.457555  236129 main.go:141] libmachine: (functional-037297) Calling .DriverName
	I0830 20:16:21.491743  236129 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0830 20:16:21.493236  236129 start.go:298] selected driver: kvm2
	I0830 20:16:21.493270  236129 start.go:902] validating driver "kvm2" against &{Name:functional-037297 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-037297 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.169 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 20:16:21.493367  236129 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 20:16:21.495691  236129 out.go:177] 
	W0830 20:16:21.497045  236129 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0830 20:16:21.498507  236129 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/ServiceCmdConnect (23.51s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.79s)

TestFunctional/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-037297 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-037297 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-8xqdz" [77b3616a-28a5-44d2-93a5-1388bad0f3f0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-8xqdz" [77b3616a-28a5-44d2-93a5-1388bad0f3f0] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 23.022922626s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.169:32345
functional_test.go:1674: http://192.168.39.169:32345: success! body:

Hostname: hello-node-connect-55497b8b78-8xqdz

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.169:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.169:32345
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (23.51s)
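
The round-trip this test performs, resolving the NodePort URL with `service --url` and then issuing a GET against it, can be sketched in Go; the profile, service name, and binary path are the ones from this run, and this is an illustrative stand-in for the harness code:

// svcconnect.go — resolve the NodePort URL, then GET it, as the test does.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-037297",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.39.169:32345
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: success! body:\n%s\n", url, body)
}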

TestFunctional/parallel/PersistentVolumeClaim (58.11s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/SSHCmd (0.5s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e79d41a2-4c72-406d-be95-565babddceb8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.024707032s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-037297 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-037297 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-037297 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-037297 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-037297 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6cbb242c-3cb2-4374-9d9f-da4df3021868] Pending
helpers_test.go:344: "sp-pod" [6cbb242c-3cb2-4374-9d9f-da4df3021868] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6cbb242c-3cb2-4374-9d9f-da4df3021868] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 32.013822204s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-037297 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-037297 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-037297 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [af8c83a3-eeb3-4412-b12e-a6c7c8127617] Pending
helpers_test.go:344: "sp-pod" [af8c83a3-eeb3-4412-b12e-a6c7c8127617] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [af8c83a3-eeb3-4412-b12e-a6c7c8127617] Running
2023/08/30 20:16:40 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.017966618s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-037297 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (58.11s)
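
The "waiting 3m0s for pods matching ..." steps above come down to polling pod phase. A minimal Go version of that poll; the label, context, and timeout are taken from the log, and it assumes exactly one pod matches the label (as in this run), which is why a plain string compare against the jsonpath output works:

// pvcwait.go — poll pod phase until Running, the wait this test performs twice.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func waitRunning(label string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Assumes exactly one pod matches the label, as in the log above.
		out, err := exec.Command("kubectl", "--context", "functional-037297", "get", "po",
			"-l", label, "-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Running" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod with label %q not Running within %s", label, timeout)
}

func main() {
	if err := waitRunning("test=storage-provisioner", 3*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("sp-pod is Running")
}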

TestFunctional/parallel/CpCmd (0.96s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

TestFunctional/parallel/MySQL (37.32s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh -n functional-037297 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 cp functional-037297:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3559321043/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh -n functional-037297 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.96s)

TestFunctional/parallel/FileSync (0.21s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-037297 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-hm6jk" [34b75ea1-4c3f-4934-9157-6ecef4e5362a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-hm6jk" [34b75ea1-4c3f-4934-9157-6ecef4e5362a] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.024672605s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-037297 exec mysql-859648c796-hm6jk -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-037297 exec mysql-859648c796-hm6jk -- mysql -ppassword -e "show databases;": exit status 1 (343.431415ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-037297 exec mysql-859648c796-hm6jk -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-037297 exec mysql-859648c796-hm6jk -- mysql -ppassword -e "show databases;": exit status 1 (252.462731ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-037297 exec mysql-859648c796-hm6jk -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-037297 exec mysql-859648c796-hm6jk -- mysql -ppassword -e "show databases;": exit status 1 (255.937057ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-037297 exec mysql-859648c796-hm6jk -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (37.32s)
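
The three failed probes above (ERROR 1045 while credentials initialize, ERROR 2002 while mysqld is still starting) are expected during startup; the test simply retries until one succeeds. A Go sketch of that retry loop with exponential backoff, using the pod name and context from this run; the timeout is an illustrative choice, not the harness's:

// mysqlretry.go — retry the in-pod probe until mysqld accepts connections.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-037297", "exec", "mysql-859648c796-hm6jk",
		"--", "mysql", "-ppassword", "-e", "show databases;"}
	deadline := time.Now().Add(2 * time.Minute)
	for backoff := time.Second; time.Now().Before(deadline); backoff *= 2 {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		// ERROR 1045/2002 are expected while the server is still starting up.
		log.Printf("probe failed (%v), retrying in %s", err, backoff)
		time.Sleep(backoff)
	}
	log.Fatal("mysql never became ready")
}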

TestFunctional/parallel/CertSync (1.32s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/229347/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh "sudo cat /etc/test/nested/copy/229347/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

TestFunctional/parallel/NodeLabels (0.08s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/229347.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh "sudo cat /etc/ssl/certs/229347.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/229347.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh "sudo cat /usr/share/ca-certificates/229347.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/2293472.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh "sudo cat /etc/ssl/certs/2293472.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/2293472.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh "sudo cat /usr/share/ca-certificates/2293472.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.32s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-037297 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/License (1.27s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-037297 ssh "sudo systemctl is-active crio": exit status 1 (220.561278ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2284: (dbg) Done: out/minikube-linux-amd64 license: (1.269585757s)
--- PASS: TestFunctional/parallel/License (1.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-037297 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-037297
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-037297
docker.io/kubernetesui/metrics-scraper:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-037297 image ls --format short --alsologtostderr:
I0830 20:16:28.993252  236558 out.go:296] Setting OutFile to fd 1 ...
I0830 20:16:28.993426  236558 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 20:16:28.993436  236558 out.go:309] Setting ErrFile to fd 2...
I0830 20:16:28.993443  236558 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 20:16:28.993665  236558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-222139/.minikube/bin
I0830 20:16:28.994330  236558 config.go:182] Loaded profile config "functional-037297": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 20:16:28.994422  236558 config.go:182] Loaded profile config "functional-037297": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 20:16:28.994728  236558 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0830 20:16:28.994780  236558 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 20:16:29.009455  236558 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
I0830 20:16:29.009880  236558 main.go:141] libmachine: () Calling .GetVersion
I0830 20:16:29.010489  236558 main.go:141] libmachine: Using API Version  1
I0830 20:16:29.010515  236558 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 20:16:29.010856  236558 main.go:141] libmachine: () Calling .GetMachineName
I0830 20:16:29.011054  236558 main.go:141] libmachine: (functional-037297) Calling .GetState
I0830 20:16:29.012612  236558 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0830 20:16:29.012656  236558 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 20:16:29.030553  236558 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44381
I0830 20:16:29.031061  236558 main.go:141] libmachine: () Calling .GetVersion
I0830 20:16:29.031704  236558 main.go:141] libmachine: Using API Version  1
I0830 20:16:29.031731  236558 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 20:16:29.032049  236558 main.go:141] libmachine: () Calling .GetMachineName
I0830 20:16:29.032256  236558 main.go:141] libmachine: (functional-037297) Calling .DriverName
I0830 20:16:29.032463  236558 ssh_runner.go:195] Run: systemctl --version
I0830 20:16:29.032494  236558 main.go:141] libmachine: (functional-037297) Calling .GetSSHHostname
I0830 20:16:29.035246  236558 main.go:141] libmachine: (functional-037297) DBG | domain functional-037297 has defined MAC address 52:54:00:6c:c9:02 in network mk-functional-037297
I0830 20:16:29.035787  236558 main.go:141] libmachine: (functional-037297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:c9:02", ip: ""} in network mk-functional-037297: {Iface:virbr1 ExpiryTime:2023-08-30 21:13:13 +0000 UTC Type:0 Mac:52:54:00:6c:c9:02 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:functional-037297 Clientid:01:52:54:00:6c:c9:02}
I0830 20:16:29.035819  236558 main.go:141] libmachine: (functional-037297) DBG | domain functional-037297 has defined IP address 192.168.39.169 and MAC address 52:54:00:6c:c9:02 in network mk-functional-037297
I0830 20:16:29.035965  236558 main.go:141] libmachine: (functional-037297) Calling .GetSSHPort
I0830 20:16:29.036157  236558 main.go:141] libmachine: (functional-037297) Calling .GetSSHKeyPath
I0830 20:16:29.036329  236558 main.go:141] libmachine: (functional-037297) Calling .GetSSHUsername
I0830 20:16:29.036461  236558 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/functional-037297/id_rsa Username:docker}
I0830 20:16:29.130141  236558 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0830 20:16:29.173614  236558 main.go:141] libmachine: Making call to close driver server
I0830 20:16:29.173626  236558 main.go:141] libmachine: (functional-037297) Calling .Close
I0830 20:16:29.173946  236558 main.go:141] libmachine: Successfully made call to close driver server
I0830 20:16:29.173970  236558 main.go:141] libmachine: Making call to close connection to plugin binary
I0830 20:16:29.173984  236558 main.go:141] libmachine: Making call to close driver server
I0830 20:16:29.173992  236558 main.go:141] libmachine: (functional-037297) Calling .Close
I0830 20:16:29.173991  236558 main.go:141] libmachine: (functional-037297) DBG | Closing plugin on server side
I0830 20:16:29.174231  236558 main.go:141] libmachine: Successfully made call to close driver server
I0830 20:16:29.174252  236558 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
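
The stderr above shows `image ls` shelling out (over SSH, inside the VM) to docker images --no-trunc --format "{{json .}}", which prints one JSON object per line. A Go sketch that parses that format; it runs against a local docker daemon rather than the VM's, and the field names are docker's standard formatting keys, not minikube-specific:

// imagelist.go — parse the line-delimited JSON behind `image ls`.
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type image struct {
	Repository string `json:"Repository"`
	Tag        string `json:"Tag"`
	ID         string `json:"ID"`
	Size       string `json:"Size"`
}

func main() {
	out, err := exec.Command("docker", "images", "--no-trunc", "--format", "{{json .}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if len(bytes.TrimSpace(sc.Bytes())) == 0 {
			continue // skip blank lines
		}
		var img image
		if err := json.Unmarshal(sc.Bytes(), &img); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s:%s  %s  %s\n", img.Repository, img.Tag, img.ID, img.Size)
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}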

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-037297 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-037297 | 332a7ca418459 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.28.1           | 821b3dfea27be | 122MB  |
| docker.io/library/nginx                     | latest            | eea7b3dcba7ee | 187MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| gcr.io/google-containers/addon-resizer      | functional-037297 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.28.1           | 5c801295c21d0 | 126MB  |
| registry.k8s.io/kube-scheduler              | v1.28.1           | b462ce0c8b1ff | 60.1MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/kube-proxy                  | v1.28.1           | 6cdbabde3874e | 73.1MB |
| docker.io/library/mysql                     | 5.7               | 92034fe9a41f4 | 581MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-037297 image ls --format table --alsologtostderr:
I0830 20:16:31.859398  236828 out.go:296] Setting OutFile to fd 1 ...
I0830 20:16:31.859554  236828 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 20:16:31.859569  236828 out.go:309] Setting ErrFile to fd 2...
I0830 20:16:31.859576  236828 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 20:16:31.859911  236828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-222139/.minikube/bin
I0830 20:16:31.860796  236828 config.go:182] Loaded profile config "functional-037297": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 20:16:31.860958  236828 config.go:182] Loaded profile config "functional-037297": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 20:16:31.861507  236828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0830 20:16:31.861575  236828 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 20:16:31.876300  236828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36701
I0830 20:16:31.876816  236828 main.go:141] libmachine: () Calling .GetVersion
I0830 20:16:31.877459  236828 main.go:141] libmachine: Using API Version  1
I0830 20:16:31.877483  236828 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 20:16:31.877847  236828 main.go:141] libmachine: () Calling .GetMachineName
I0830 20:16:31.878064  236828 main.go:141] libmachine: (functional-037297) Calling .GetState
I0830 20:16:31.880235  236828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0830 20:16:31.880303  236828 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 20:16:31.899760  236828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38897
I0830 20:16:31.900212  236828 main.go:141] libmachine: () Calling .GetVersion
I0830 20:16:31.900819  236828 main.go:141] libmachine: Using API Version  1
I0830 20:16:31.900843  236828 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 20:16:31.901188  236828 main.go:141] libmachine: () Calling .GetMachineName
I0830 20:16:31.901355  236828 main.go:141] libmachine: (functional-037297) Calling .DriverName
I0830 20:16:31.901619  236828 ssh_runner.go:195] Run: systemctl --version
I0830 20:16:31.901651  236828 main.go:141] libmachine: (functional-037297) Calling .GetSSHHostname
I0830 20:16:31.904510  236828 main.go:141] libmachine: (functional-037297) DBG | domain functional-037297 has defined MAC address 52:54:00:6c:c9:02 in network mk-functional-037297
I0830 20:16:31.904928  236828 main.go:141] libmachine: (functional-037297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:c9:02", ip: ""} in network mk-functional-037297: {Iface:virbr1 ExpiryTime:2023-08-30 21:13:13 +0000 UTC Type:0 Mac:52:54:00:6c:c9:02 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:functional-037297 Clientid:01:52:54:00:6c:c9:02}
I0830 20:16:31.904973  236828 main.go:141] libmachine: (functional-037297) DBG | domain functional-037297 has defined IP address 192.168.39.169 and MAC address 52:54:00:6c:c9:02 in network mk-functional-037297
I0830 20:16:31.905030  236828 main.go:141] libmachine: (functional-037297) Calling .GetSSHPort
I0830 20:16:31.905204  236828 main.go:141] libmachine: (functional-037297) Calling .GetSSHKeyPath
I0830 20:16:31.905332  236828 main.go:141] libmachine: (functional-037297) Calling .GetSSHUsername
I0830 20:16:31.905500  236828 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/functional-037297/id_rsa Username:docker}
I0830 20:16:31.998520  236828 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0830 20:16:32.030913  236828 main.go:141] libmachine: Making call to close driver server
I0830 20:16:32.030937  236828 main.go:141] libmachine: (functional-037297) Calling .Close
I0830 20:16:32.031305  236828 main.go:141] libmachine: (functional-037297) DBG | Closing plugin on server side
I0830 20:16:32.031305  236828 main.go:141] libmachine: Successfully made call to close driver server
I0830 20:16:32.031341  236828 main.go:141] libmachine: Making call to close connection to plugin binary
I0830 20:16:32.031356  236828 main.go:141] libmachine: Making call to close driver server
I0830 20:16:32.031369  236828 main.go:141] libmachine: (functional-037297) Calling .Close
I0830 20:16:32.031611  236828 main.go:141] libmachine: (functional-037297) DBG | Closing plugin on server side
I0830 20:16:32.031646  236828 main.go:141] libmachine: Successfully made call to close driver server
I0830 20:16:32.031659  236828 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-037297 image ls --format json --alsologtostderr:
[{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-037297"],"size":"32900000"},{"id":"5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"],"size":"126000000"},{"id":"6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.1"],"size":"73100000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"115053965e86b2df4d78af78d7951b8644839d
20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"332a7ca418459751b9ca6d47cc145476209af3c551916053509aed0602fd9829","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-037297"],"size":"30"},{"id":"821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.1"],"size":"122000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","r
epoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.1"],"size":"60100000"},{"id":"eea7b3dcba7ee47c0d16a60cc85d2b977d166be3960541991f3e6294d795ed24","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"581000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000
"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-037297 image ls --format json --alsologtostderr:
I0830 20:16:31.608283  236805 out.go:296] Setting OutFile to fd 1 ...
I0830 20:16:31.608386  236805 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 20:16:31.608395  236805 out.go:309] Setting ErrFile to fd 2...
I0830 20:16:31.608399  236805 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 20:16:31.608589  236805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-222139/.minikube/bin
I0830 20:16:31.609131  236805 config.go:182] Loaded profile config "functional-037297": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 20:16:31.609234  236805 config.go:182] Loaded profile config "functional-037297": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 20:16:31.609579  236805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0830 20:16:31.609636  236805 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 20:16:31.624185  236805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46227
I0830 20:16:31.624631  236805 main.go:141] libmachine: () Calling .GetVersion
I0830 20:16:31.625231  236805 main.go:141] libmachine: Using API Version  1
I0830 20:16:31.625259  236805 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 20:16:31.625644  236805 main.go:141] libmachine: () Calling .GetMachineName
I0830 20:16:31.625850  236805 main.go:141] libmachine: (functional-037297) Calling .GetState
I0830 20:16:31.627647  236805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0830 20:16:31.627688  236805 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 20:16:31.641777  236805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40769
I0830 20:16:31.642238  236805 main.go:141] libmachine: () Calling .GetVersion
I0830 20:16:31.642720  236805 main.go:141] libmachine: Using API Version  1
I0830 20:16:31.642742  236805 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 20:16:31.643070  236805 main.go:141] libmachine: () Calling .GetMachineName
I0830 20:16:31.643257  236805 main.go:141] libmachine: (functional-037297) Calling .DriverName
I0830 20:16:31.643471  236805 ssh_runner.go:195] Run: systemctl --version
I0830 20:16:31.643504  236805 main.go:141] libmachine: (functional-037297) Calling .GetSSHHostname
I0830 20:16:31.646198  236805 main.go:141] libmachine: (functional-037297) DBG | domain functional-037297 has defined MAC address 52:54:00:6c:c9:02 in network mk-functional-037297
I0830 20:16:31.646643  236805 main.go:141] libmachine: (functional-037297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:c9:02", ip: ""} in network mk-functional-037297: {Iface:virbr1 ExpiryTime:2023-08-30 21:13:13 +0000 UTC Type:0 Mac:52:54:00:6c:c9:02 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:functional-037297 Clientid:01:52:54:00:6c:c9:02}
I0830 20:16:31.646684  236805 main.go:141] libmachine: (functional-037297) DBG | domain functional-037297 has defined IP address 192.168.39.169 and MAC address 52:54:00:6c:c9:02 in network mk-functional-037297
I0830 20:16:31.646879  236805 main.go:141] libmachine: (functional-037297) Calling .GetSSHPort
I0830 20:16:31.647064  236805 main.go:141] libmachine: (functional-037297) Calling .GetSSHKeyPath
I0830 20:16:31.647252  236805 main.go:141] libmachine: (functional-037297) Calling .GetSSHUsername
I0830 20:16:31.647431  236805 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/functional-037297/id_rsa Username:docker}
I0830 20:16:31.737577  236805 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0830 20:16:31.796012  236805 main.go:141] libmachine: Making call to close driver server
I0830 20:16:31.796029  236805 main.go:141] libmachine: (functional-037297) Calling .Close
I0830 20:16:31.796350  236805 main.go:141] libmachine: Successfully made call to close driver server
I0830 20:16:31.796385  236805 main.go:141] libmachine: Making call to close connection to plugin binary
I0830 20:16:31.796399  236805 main.go:141] libmachine: Making call to close driver server
I0830 20:16:31.796414  236805 main.go:141] libmachine: (functional-037297) Calling .Close
I0830 20:16:31.796688  236805 main.go:141] libmachine: Successfully made call to close driver server
I0830 20:16:31.796706  236805 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
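For reference, the JSON above is a flat array of image records, and the YAML listing in the next test carries the same fields. A minimal Go sketch (not part of the test suite; the struct shape is inferred from the stdout above) that decodes the output of `image ls --format json`:

// listimages.go — decode `minikube image ls --format json`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // sizes are strings in the output, e.g. "294000000"
}

func main() {
	// Profile name taken from the log; adjust for your cluster.
	out, err := exec.Command("minikube", "-p", "functional-037297",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}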

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-037297 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "581000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-037297
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 332a7ca418459751b9ca6d47cc145476209af3c551916053509aed0602fd9829
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-037297
size: "30"
- id: 821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.1
size: "122000000"
- id: b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.1
size: "60100000"
- id: eea7b3dcba7ee47c0d16a60cc85d2b977d166be3960541991f3e6294d795ed24
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.1
size: "126000000"
- id: 6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.1
size: "73100000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-037297 image ls --format yaml --alsologtostderr:
I0830 20:16:29.229486  236583 out.go:296] Setting OutFile to fd 1 ...
I0830 20:16:29.229672  236583 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 20:16:29.229685  236583 out.go:309] Setting ErrFile to fd 2...
I0830 20:16:29.229692  236583 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 20:16:29.229986  236583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-222139/.minikube/bin
I0830 20:16:29.230825  236583 config.go:182] Loaded profile config "functional-037297": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 20:16:29.230981  236583 config.go:182] Loaded profile config "functional-037297": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 20:16:29.231523  236583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0830 20:16:29.231602  236583 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 20:16:29.246234  236583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39163
I0830 20:16:29.246678  236583 main.go:141] libmachine: () Calling .GetVersion
I0830 20:16:29.247545  236583 main.go:141] libmachine: Using API Version  1
I0830 20:16:29.247577  236583 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 20:16:29.248028  236583 main.go:141] libmachine: () Calling .GetMachineName
I0830 20:16:29.248228  236583 main.go:141] libmachine: (functional-037297) Calling .GetState
I0830 20:16:29.250308  236583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0830 20:16:29.250379  236583 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 20:16:29.265197  236583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35003
I0830 20:16:29.265642  236583 main.go:141] libmachine: () Calling .GetVersion
I0830 20:16:29.266148  236583 main.go:141] libmachine: Using API Version  1
I0830 20:16:29.266164  236583 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 20:16:29.266513  236583 main.go:141] libmachine: () Calling .GetMachineName
I0830 20:16:29.266719  236583 main.go:141] libmachine: (functional-037297) Calling .DriverName
I0830 20:16:29.266927  236583 ssh_runner.go:195] Run: systemctl --version
I0830 20:16:29.266952  236583 main.go:141] libmachine: (functional-037297) Calling .GetSSHHostname
I0830 20:16:29.269604  236583 main.go:141] libmachine: (functional-037297) DBG | domain functional-037297 has defined MAC address 52:54:00:6c:c9:02 in network mk-functional-037297
I0830 20:16:29.270002  236583 main.go:141] libmachine: (functional-037297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:c9:02", ip: ""} in network mk-functional-037297: {Iface:virbr1 ExpiryTime:2023-08-30 21:13:13 +0000 UTC Type:0 Mac:52:54:00:6c:c9:02 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:functional-037297 Clientid:01:52:54:00:6c:c9:02}
I0830 20:16:29.270039  236583 main.go:141] libmachine: (functional-037297) DBG | domain functional-037297 has defined IP address 192.168.39.169 and MAC address 52:54:00:6c:c9:02 in network mk-functional-037297
I0830 20:16:29.270173  236583 main.go:141] libmachine: (functional-037297) Calling .GetSSHPort
I0830 20:16:29.270360  236583 main.go:141] libmachine: (functional-037297) Calling .GetSSHKeyPath
I0830 20:16:29.270516  236583 main.go:141] libmachine: (functional-037297) Calling .GetSSHUsername
I0830 20:16:29.270671  236583 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/functional-037297/id_rsa Username:docker}
I0830 20:16:29.372042  236583 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0830 20:16:29.413557  236583 main.go:141] libmachine: Making call to close driver server
I0830 20:16:29.413571  236583 main.go:141] libmachine: (functional-037297) Calling .Close
I0830 20:16:29.413817  236583 main.go:141] libmachine: Successfully made call to close driver server
I0830 20:16:29.413849  236583 main.go:141] libmachine: Making call to close connection to plugin binary
I0830 20:16:29.413934  236583 main.go:141] libmachine: (functional-037297) DBG | Closing plugin on server side
I0830 20:16:29.413997  236583 main.go:141] libmachine: Making call to close driver server
I0830 20:16:29.414013  236583 main.go:141] libmachine: (functional-037297) Calling .Close
I0830 20:16:29.414230  236583 main.go:141] libmachine: Successfully made call to close driver server
I0830 20:16:29.414262  236583 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-037297 ssh pgrep buildkitd: exit status 1 (202.673574ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 image build -t localhost/my-image:functional-037297 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-037297 image build -t localhost/my-image:functional-037297 testdata/build --alsologtostderr: (3.64419857s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-037297 image build -t localhost/my-image:functional-037297 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in d0962a5fa3cd
Removing intermediate container d0962a5fa3cd
---> 9806345c839d
Step 3/3 : ADD content.txt /
---> eb08f82cc404
Successfully built eb08f82cc404
Successfully tagged localhost/my-image:functional-037297
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-037297 image build -t localhost/my-image:functional-037297 testdata/build --alsologtostderr:
I0830 20:16:29.667394  236665 out.go:296] Setting OutFile to fd 1 ...
I0830 20:16:29.667562  236665 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 20:16:29.667574  236665 out.go:309] Setting ErrFile to fd 2...
I0830 20:16:29.667580  236665 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 20:16:29.667829  236665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-222139/.minikube/bin
I0830 20:16:29.668453  236665 config.go:182] Loaded profile config "functional-037297": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 20:16:29.669132  236665 config.go:182] Loaded profile config "functional-037297": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 20:16:29.669613  236665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0830 20:16:29.669688  236665 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 20:16:29.684033  236665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38551
I0830 20:16:29.684511  236665 main.go:141] libmachine: () Calling .GetVersion
I0830 20:16:29.685118  236665 main.go:141] libmachine: Using API Version  1
I0830 20:16:29.685145  236665 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 20:16:29.685454  236665 main.go:141] libmachine: () Calling .GetMachineName
I0830 20:16:29.685629  236665 main.go:141] libmachine: (functional-037297) Calling .GetState
I0830 20:16:29.687404  236665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0830 20:16:29.687445  236665 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 20:16:29.701587  236665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39209
I0830 20:16:29.701991  236665 main.go:141] libmachine: () Calling .GetVersion
I0830 20:16:29.702491  236665 main.go:141] libmachine: Using API Version  1
I0830 20:16:29.702509  236665 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 20:16:29.702822  236665 main.go:141] libmachine: () Calling .GetMachineName
I0830 20:16:29.703041  236665 main.go:141] libmachine: (functional-037297) Calling .DriverName
I0830 20:16:29.703326  236665 ssh_runner.go:195] Run: systemctl --version
I0830 20:16:29.703372  236665 main.go:141] libmachine: (functional-037297) Calling .GetSSHHostname
I0830 20:16:29.705870  236665 main.go:141] libmachine: (functional-037297) DBG | domain functional-037297 has defined MAC address 52:54:00:6c:c9:02 in network mk-functional-037297
I0830 20:16:29.706231  236665 main.go:141] libmachine: (functional-037297) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:c9:02", ip: ""} in network mk-functional-037297: {Iface:virbr1 ExpiryTime:2023-08-30 21:13:13 +0000 UTC Type:0 Mac:52:54:00:6c:c9:02 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:functional-037297 Clientid:01:52:54:00:6c:c9:02}
I0830 20:16:29.706261  236665 main.go:141] libmachine: (functional-037297) DBG | domain functional-037297 has defined IP address 192.168.39.169 and MAC address 52:54:00:6c:c9:02 in network mk-functional-037297
I0830 20:16:29.706432  236665 main.go:141] libmachine: (functional-037297) Calling .GetSSHPort
I0830 20:16:29.706611  236665 main.go:141] libmachine: (functional-037297) Calling .GetSSHKeyPath
I0830 20:16:29.706748  236665 main.go:141] libmachine: (functional-037297) Calling .GetSSHUsername
I0830 20:16:29.706853  236665 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/functional-037297/id_rsa Username:docker}
I0830 20:16:29.797769  236665 build_images.go:151] Building image from path: /tmp/build.103142858.tar
I0830 20:16:29.797849  236665 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0830 20:16:29.810231  236665 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.103142858.tar
I0830 20:16:29.815042  236665 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.103142858.tar: stat -c "%s %y" /var/lib/minikube/build/build.103142858.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.103142858.tar': No such file or directory
I0830 20:16:29.815079  236665 ssh_runner.go:362] scp /tmp/build.103142858.tar --> /var/lib/minikube/build/build.103142858.tar (3072 bytes)
I0830 20:16:29.850486  236665 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.103142858
I0830 20:16:29.861368  236665 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.103142858 -xf /var/lib/minikube/build/build.103142858.tar
I0830 20:16:29.880754  236665 docker.go:339] Building image: /var/lib/minikube/build/build.103142858
I0830 20:16:29.880827  236665 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-037297 /var/lib/minikube/build/build.103142858
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0830 20:16:33.238134  236665 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-037297 /var/lib/minikube/build/build.103142858: (3.357252741s)
I0830 20:16:33.238250  236665 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.103142858
I0830 20:16:33.250728  236665 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.103142858.tar
I0830 20:16:33.261307  236665 build_images.go:207] Built localhost/my-image:functional-037297 from /tmp/build.103142858.tar
I0830 20:16:33.261343  236665 build_images.go:123] succeeded building to: functional-037297
I0830 20:16:33.261348  236665 build_images.go:124] failed building to: 
I0830 20:16:33.261380  236665 main.go:141] libmachine: Making call to close driver server
I0830 20:16:33.261403  236665 main.go:141] libmachine: (functional-037297) Calling .Close
I0830 20:16:33.261681  236665 main.go:141] libmachine: Successfully made call to close driver server
I0830 20:16:33.261711  236665 main.go:141] libmachine: Making call to close connection to plugin binary
I0830 20:16:33.261722  236665 main.go:141] libmachine: Making call to close driver server
I0830 20:16:33.261730  236665 main.go:141] libmachine: (functional-037297) Calling .Close
I0830 20:16:33.261754  236665 main.go:141] libmachine: (functional-037297) DBG | Closing plugin on server side
I0830 20:16:33.261980  236665 main.go:141] libmachine: Successfully made call to close driver server
I0830 20:16:33.262001  236665 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)
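The stderr above traces the build flow in minikube's build_images.go: the context directory testdata/build (whose Dockerfile, per the stdout, is just FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) is packed into /tmp/build.*.tar, copied into the guest, unpacked under /var/lib/minikube/build, and built with docker build there. A rough Go sketch of just the packing step, with illustrative paths; the real implementation lives in build_images.go:

// packcontext.go — pack a build-context directory into a tar, roughly the
// first step visible in the stderr ("Building image from path: /tmp/build.*.tar").
package main

import (
	"archive/tar"
	"io"
	"os"
	"path/filepath"
)

// tarDir writes every regular file under dir into a tar stream.
// Symlinks and special files are skipped for brevity.
func tarDir(dir string, w io.Writer) error {
	tw := tar.NewWriter(w)
	defer tw.Close()
	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil || !info.Mode().IsRegular() {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(dir, path)
		if err != nil {
			return err
		}
		hdr.Name = rel // store paths relative to the context root
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(tw, f)
		return err
	})
}

func main() {
	out, err := os.Create("/tmp/build-context.tar") // illustrative path
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := tarDir("testdata/build", out); err != nil {
		panic(err)
	}
}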

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/DockerEnv/bash (0.83s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-037297 docker-env) && out/minikube-linux-amd64 status -p functional-037297"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-037297 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.83s)
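The DockerEnv/bash test works by eval-ing `minikube docker-env` in bash and then pointing `docker images` at the cluster's daemon. The same round trip can be scripted without a shell; a minimal sketch assuming the bash-style `export KEY="VALUE"` lines that docker-env prints:

// dockerenv.go — replicate `eval $(minikube docker-env) && docker images`
// without a shell. The export-line parsing here is deliberately simplified.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-037297", "docker-env").Output()
	if err != nil {
		panic(err)
	}
	for _, line := range strings.Split(string(out), "\n") {
		line = strings.TrimPrefix(line, "export ")
		// Only the DOCKER_* variables matter for the docker CLI.
		if k, v, ok := strings.Cut(line, "="); ok && strings.HasPrefix(k, "DOCKER_") {
			os.Setenv(k, strings.Trim(v, `"`))
		}
	}
	images, err := exec.Command("docker", "images").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(images))
}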

TestFunctional/parallel/ImageCommands/Setup (2.38s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.359848657s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-037297
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.38s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.55s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.55s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 image load --daemon gcr.io/google-containers/addon-resizer:functional-037297 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-037297 image load --daemon gcr.io/google-containers/addon-resizer:functional-037297 --alsologtostderr: (5.061530848s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.34s)

TestFunctional/parallel/MountCmd/any-port (27.57s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-037297 /tmp/TestFunctionalparallelMountCmdany-port898490445/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1693426544681043622" to /tmp/TestFunctionalparallelMountCmdany-port898490445/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1693426544681043622" to /tmp/TestFunctionalparallelMountCmdany-port898490445/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1693426544681043622" to /tmp/TestFunctionalparallelMountCmdany-port898490445/001/test-1693426544681043622
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-037297 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (213.32219ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 30 20:15 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 30 20:15 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 30 20:15 test-1693426544681043622
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh cat /mount-9p/test-1693426544681043622
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-037297 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [bcc56f2f-823d-4d32-9278-507d90cb866b] Pending
helpers_test.go:344: "busybox-mount" [bcc56f2f-823d-4d32-9278-507d90cb866b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [bcc56f2f-823d-4d32-9278-507d90cb866b] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [bcc56f2f-823d-4d32-9278-507d90cb866b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 25.01865484s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-037297 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-037297 /tmp/TestFunctionalparallelMountCmdany-port898490445/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (27.57s)
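Worth noting in the log above: the first `findmnt -T /mount-9p` probe exits non-zero while the 9p mount is still coming up, and the test simply re-runs it. A small sketch of that retry pattern (attempt count and backoff are illustrative):

// mountcheck.go — retry the findmnt probe until the 9p mount appears.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 5; i++ {
		out, err := exec.Command("minikube", "-p", "functional-037297",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		time.Sleep(time.Second) // mount not ready yet; try again
	}
	panic("mount never became visible")
}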

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 image load --daemon gcr.io/google-containers/addon-resizer:functional-037297 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-037297 image load --daemon gcr.io/google-containers/addon-resizer:functional-037297 --alsologtostderr: (2.320637834s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.56s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.315263769s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-037297
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 image load --daemon gcr.io/google-containers/addon-resizer:functional-037297 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-037297 image load --daemon gcr.io/google-containers/addon-resizer:functional-037297 --alsologtostderr: (3.989109279s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.54s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 image save gcr.io/google-containers/addon-resizer:functional-037297 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-037297 image save gcr.io/google-containers/addon-resizer:functional-037297 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.614270709s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.61s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 image rm gcr.io/google-containers/addon-resizer:functional-037297 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-037297 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.400091976s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.63s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-037297
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 image save --daemon gcr.io/google-containers/addon-resizer:functional-037297 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-037297 image save --daemon gcr.io/google-containers/addon-resizer:functional-037297 --alsologtostderr: (1.872537831s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-037297
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.91s)

TestFunctional/parallel/MountCmd/specific-port (1.75s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-037297 /tmp/TestFunctionalparallelMountCmdspecific-port2097310545/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-037297 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (199.188328ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-037297 /tmp/TestFunctionalparallelMountCmdspecific-port2097310545/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-037297 ssh "sudo umount -f /mount-9p": exit status 1 (209.885048ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-037297 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-037297 /tmp/TestFunctionalparallelMountCmdspecific-port2097310545/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.75s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.89s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-037297 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1487536295/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-037297 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1487536295/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-037297 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1487536295/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-037297 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-037297 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1487536295/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-037297 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1487536295/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-037297 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1487536295/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.89s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-037297 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-037297 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-gwpfp" [57da41ef-5c1b-427f-9b10-2f04d6da5c5b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-gwpfp" [57da41ef-5c1b-427f-9b10-2f04d6da5c5b] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.011268439s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.20s)
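The wait above (helpers_test.go:344) polls until a pod labelled app=hello-node reports Running. A rough stand-in using kubectl's jsonpath output; the context, label, and 10m0s budget mirror the log, while the polling loop itself is illustrative:

// waitpod.go — poll pod phase until app=hello-node is Running.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(10 * time.Minute) // the test allows 10m0s
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "functional-037297",
			"get", "pods", "-l", "app=hello-node",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			fmt.Println("hello-node is running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for app=hello-node")
}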

TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0830 20:16:20.588060  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "270.296788ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "43.156291ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "210.861291ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "42.237737ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

TestFunctional/parallel/ServiceCmd/List (1.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 service list
functional_test.go:1458: (dbg) Done: out/minikube-linux-amd64 -p functional-037297 service list: (1.255400153s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.26s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-linux-amd64 -p functional-037297 service list -o json: (1.279473843s)
functional_test.go:1493: Took "1.279584143s" to run "out/minikube-linux-amd64 -p functional-037297 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.28s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.169:31217
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

TestFunctional/parallel/ServiceCmd/Format (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

TestFunctional/parallel/ServiceCmd/URL (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-037297 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.169:31217
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)
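The test resolves the hello-node NodePort URL (http://192.168.39.169:31217 above) via `minikube service --url`. A minimal sketch that repeats the lookup and issues a plain GET against the endpoint:

// checksvc.go — resolve the service URL and verify it answers.
package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-037297",
		"service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(url, "->", resp.Status)
}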

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-037297
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-037297
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-037297
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (319.39s)
=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-450430 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E0830 20:44:23.286711  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-450430 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (2m2.729388782s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-450430 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-450430 cache add gcr.io/k8s-minikube/gvisor-addon:2: (22.167111498s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-450430 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-450430 addons enable gvisor: (4.268741063s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [371830d0-959b-4510-9f9e-ddad8994b023] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.025743445s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-450430 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [5ad0c4d3-1b18-4e41-8464-36d98ea59a5d] Pending
helpers_test.go:344: "nginx-gvisor" [5ad0c4d3-1b18-4e41-8464-36d98ea59a5d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [5ad0c4d3-1b18-4e41-8464-36d98ea59a5d] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 17.243774794s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-450430
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-450430: (1m31.912683159s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-450430 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-450430 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (44.380673474s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [371830d0-959b-4510-9f9e-ddad8994b023] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:344: "gvisor" [371830d0-959b-4510-9f9e-ddad8994b023] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.034016438s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [5ad0c4d3-1b18-4e41-8464-36d98ea59a5d] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0830 20:49:23.287345  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.095980821s
helpers_test.go:175: Cleaning up "gvisor-450430" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-450430
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-450430: (1.192003124s)
--- PASS: TestGvisorAddon (319.39s)
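
The "waiting 4m0s for pods matching ..." checks above poll until the labelled pod is healthy. Below is a rough standalone sketch of that pattern using plain kubectl rather than the suite's helpers_test.go implementation; the 5-second poll interval and the simple Running-phase check are simplifying assumptions.

	// Sketch: poll until a pod matching a label selector reports phase Running.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func waitForRunning(kubeContext, namespace, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// jsonpath prints the phase of every matching pod, space-separated.
			out, err := exec.Command("kubectl", "--context", kubeContext, "-n", namespace,
				"get", "pods", "-l", selector,
				"-o", "jsonpath={.items[*].status.phase}").Output()
			if err == nil && strings.Contains(string(out), "Running") {
				return nil // simplification: any Running pod counts as healthy
			}
			time.Sleep(5 * time.Second)
		}
		return fmt.Errorf("pods matching %q not Running within %v", selector, timeout)
	}

	func main() {
		if err := waitForRunning("gvisor-450430", "kube-system",
			"kubernetes.io/minikube-addons=gvisor", 4*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("gvisor pod is Running")
	}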

TestImageBuild/serial/Setup (50.61s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-844133 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-844133 --driver=kvm2 : (50.605809132s)
--- PASS: TestImageBuild/serial/Setup (50.61s)

TestImageBuild/serial/NormalBuild (2.32s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-844133
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-844133: (2.31517949s)
--- PASS: TestImageBuild/serial/NormalBuild (2.32s)

TestImageBuild/serial/BuildWithBuildArg (1.18s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-844133
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-844133: (1.179329397s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.18s)

TestImageBuild/serial/BuildWithDockerIgnore (0.36s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-844133
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.36s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-844133
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)

TestIngressAddonLegacy/StartLegacyK8sCluster (89.85s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-290437 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
E0830 20:17:42.508716  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-290437 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m29.854235701s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (89.85s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.4s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-290437 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-290437 addons enable ingress --alsologtostderr -v=5: (13.3952889s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.40s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-290437 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (34.17s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-290437 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-290437 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.015528521s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-290437 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-290437 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f3c8f92a-4cab-4eb4-a1e3-2a673b6649fc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f3c8f92a-4cab-4eb4-a1e3-2a673b6649fc] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.018604598s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-290437 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-290437 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-290437 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.83
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-290437 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-290437 addons disable ingress-dns --alsologtostderr -v=1: (2.411770998s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-290437 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-290437 addons disable ingress --alsologtostderr -v=1: (7.509603453s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (34.17s)
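
The ingress check above curls 127.0.0.1 with "Host: nginx.example.com" from inside the VM. The same request expressed in Go, where the Host header is overridden via Request.Host; running it directly on the host instead of over "minikube ssh" is an assumption made to keep the example self-contained.

	// Sketch: hit the ingress with a custom Host header, as the curl above does.
	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
		if err != nil {
			panic(err)
		}
		req.Host = "nginx.example.com" // matches the Ingress rule's host
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%d, %d bytes\n", resp.StatusCode, len(body))
	}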

TestJSONOutput/start/Command (67.67s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-155929 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0830 20:19:58.659458  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
E0830 20:20:26.350968  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
E0830 20:20:43.393282  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
E0830 20:20:43.398551  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
E0830 20:20:43.408803  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
E0830 20:20:43.429071  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
E0830 20:20:43.469330  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
E0830 20:20:43.549651  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
E0830 20:20:43.710043  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
E0830 20:20:44.030768  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
E0830 20:20:44.671727  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
E0830 20:20:45.952265  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
E0830 20:20:48.512692  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
E0830 20:20:53.633439  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
E0830 20:21:03.873949  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-155929 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m7.672039326s)
--- PASS: TestJSONOutput/start/Command (67.67s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.57s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-155929 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.57s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.52s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-155929 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.52s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (13.11s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-155929 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-155929 --output=json --user=testUser: (13.107032904s)
--- PASS: TestJSONOutput/stop/Command (13.11s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-615795 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-615795 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.700827ms)

-- stdout --
	{"specversion":"1.0","id":"071a7eeb-53ed-40d3-819b-bf2b1e7211c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-615795] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bfee3387-7efb-487d-a754-6ae2ba97bb5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17145"}}
	{"specversion":"1.0","id":"4117822b-fe3a-4f22-856a-4a8671f1cc33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ee296cc2-6c70-44cc-ab97-356c4303028b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17145-222139/kubeconfig"}}
	{"specversion":"1.0","id":"c743871f-00d2-46c7-a4b4-a13c6d8255da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-222139/.minikube"}}
	{"specversion":"1.0","id":"c5b3c9ef-ebd4-42b9-982e-11bb6da0028e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"044902b3-1902-4be2-ae42-d0ddf19ead9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"92ad3a71-3114-4f4a-ba40-b0256f85c784","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-615795" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-615795
--- PASS: TestErrorJSONOutput (0.19s)
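
Each line in the stdout block above is a CloudEvents-style JSON object. A minimal sketch of decoding that stream follows; the struct mirrors only the fields visible above (specversion, id, source, type, data), and reading the events from stdin is an illustrative choice.

	// Sketch: pick minikube error events out of a --output=json stream.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type minikubeEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | this program
		for sc.Scan() {
			var ev minikubeEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip anything that is not a JSON event line
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exit code %s): %s\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}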

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (103.66s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-153300 --driver=kvm2 
E0830 20:21:24.355038  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
E0830 20:22:05.316078  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-153300 --driver=kvm2 : (52.103960759s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-156005 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-156005 --driver=kvm2 : (48.777634713s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-153300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-156005
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-156005" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-156005
helpers_test.go:175: Cleaning up "first-153300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-153300
--- PASS: TestMinikubeProfile (103.66s)

TestMountStart/serial/StartWithMountFirst (31.8s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-101616 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0830 20:23:27.238892  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-101616 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (30.802678378s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.80s)

TestMountStart/serial/VerifyMountFirst (0.39s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-101616 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-101616 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (29.67s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-120702 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-120702 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (28.673299978s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.67s)

TestMountStart/serial/VerifyMountSecond (0.38s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120702 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120702 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.89s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-101616 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120702 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120702 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (2.42s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-120702
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-120702: (2.419372643s)
--- PASS: TestMountStart/serial/Stop (2.42s)

TestMountStart/serial/RestartStopped (26.19s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-120702
E0830 20:24:23.287479  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
E0830 20:24:23.292768  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
E0830 20:24:23.303036  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
E0830 20:24:23.323326  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
E0830 20:24:23.363730  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
E0830 20:24:23.444188  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
E0830 20:24:23.604627  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
E0830 20:24:23.925300  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
E0830 20:24:24.566279  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
E0830 20:24:25.846871  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
E0830 20:24:28.407430  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
E0830 20:24:33.528515  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-120702: (25.194283968s)
--- PASS: TestMountStart/serial/RestartStopped (26.19s)

TestMountStart/serial/VerifyMountPostStop (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120702 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120702 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (144.74s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-944570 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0830 20:24:43.768909  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
E0830 20:24:58.661574  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
E0830 20:25:04.249097  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
E0830 20:25:43.393159  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
E0830 20:25:45.209440  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
E0830 20:26:11.080620  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-944570 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m24.318327275s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (144.74s)

TestMultiNode/serial/DeployApp2Nodes (5.13s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-944570 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-944570 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-944570 -- rollout status deployment/busybox: (3.41507559s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-944570 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-944570 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-944570 -- exec busybox-5bc68d56bd-fhrtd -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-944570 -- exec busybox-5bc68d56bd-n5m7r -- nslookup kubernetes.io
E0830 20:27:07.130428  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-944570 -- exec busybox-5bc68d56bd-fhrtd -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-944570 -- exec busybox-5bc68d56bd-n5m7r -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-944570 -- exec busybox-5bc68d56bd-fhrtd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-944570 -- exec busybox-5bc68d56bd-n5m7r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.13s)

TestMultiNode/serial/PingHostFrom2Pods (0.87s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-944570 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-944570 -- exec busybox-5bc68d56bd-fhrtd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-944570 -- exec busybox-5bc68d56bd-fhrtd -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-944570 -- exec busybox-5bc68d56bd-n5m7r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-944570 -- exec busybox-5bc68d56bd-n5m7r -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)
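
The host lookup above extracts the gateway IP with nslookup piped through awk 'NR==5' and cut -d' ' -f3. The same extraction in Go, assuming the busybox nslookup layout in which the fifth line reads "Address 1: <ip> <name>"; the sample output in main is illustrative, not captured from this run, and strings.Fields is a close but not exact stand-in for cut on single spaces.

	// Sketch: pull the host IP out of busybox nslookup output.
	package main

	import (
		"fmt"
		"strings"
	)

	func hostIPFromNslookup(out string) (string, error) {
		lines := strings.Split(out, "\n")
		if len(lines) < 5 {
			return "", fmt.Errorf("unexpected nslookup output: %q", out)
		}
		fields := strings.Fields(lines[4]) // awk 'NR==5' selects the fifth line
		if len(fields) < 3 {
			return "", fmt.Errorf("unexpected nslookup line: %q", lines[4])
		}
		return fields[2], nil // cut -d' ' -f3 selects the third field
	}

	func main() {
		sample := "Server:    10.96.0.10\n" +
			"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
			"\n" +
			"Name:      host.minikube.internal\n" +
			"Address 1: 192.168.39.1 host.minikube.internal\n"
		ip, err := hostIPFromNslookup(sample)
		if err != nil {
			panic(err)
		}
		fmt.Println(ip) // 192.168.39.1
	}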

TestMultiNode/serial/AddNode (46.1s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-944570 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-944570 -v 3 --alsologtostderr: (45.497848334s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.10s)

TestMultiNode/serial/ProfileList (0.21s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (7.31s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 cp testdata/cp-test.txt multinode-944570:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 ssh -n multinode-944570 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 cp multinode-944570:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile109421544/001/cp-test_multinode-944570.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 ssh -n multinode-944570 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 cp multinode-944570:/home/docker/cp-test.txt multinode-944570-m02:/home/docker/cp-test_multinode-944570_multinode-944570-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 ssh -n multinode-944570 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 ssh -n multinode-944570-m02 "sudo cat /home/docker/cp-test_multinode-944570_multinode-944570-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 cp multinode-944570:/home/docker/cp-test.txt multinode-944570-m03:/home/docker/cp-test_multinode-944570_multinode-944570-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 ssh -n multinode-944570 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 ssh -n multinode-944570-m03 "sudo cat /home/docker/cp-test_multinode-944570_multinode-944570-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 cp testdata/cp-test.txt multinode-944570-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 ssh -n multinode-944570-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 cp multinode-944570-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile109421544/001/cp-test_multinode-944570-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 ssh -n multinode-944570-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 cp multinode-944570-m02:/home/docker/cp-test.txt multinode-944570:/home/docker/cp-test_multinode-944570-m02_multinode-944570.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 ssh -n multinode-944570-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 ssh -n multinode-944570 "sudo cat /home/docker/cp-test_multinode-944570-m02_multinode-944570.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 cp multinode-944570-m02:/home/docker/cp-test.txt multinode-944570-m03:/home/docker/cp-test_multinode-944570-m02_multinode-944570-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 ssh -n multinode-944570-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 ssh -n multinode-944570-m03 "sudo cat /home/docker/cp-test_multinode-944570-m02_multinode-944570-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 cp testdata/cp-test.txt multinode-944570-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 ssh -n multinode-944570-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 cp multinode-944570-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile109421544/001/cp-test_multinode-944570-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 ssh -n multinode-944570-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 cp multinode-944570-m03:/home/docker/cp-test.txt multinode-944570:/home/docker/cp-test_multinode-944570-m03_multinode-944570.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 ssh -n multinode-944570-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 ssh -n multinode-944570 "sudo cat /home/docker/cp-test_multinode-944570-m03_multinode-944570.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 cp multinode-944570-m03:/home/docker/cp-test.txt multinode-944570-m02:/home/docker/cp-test_multinode-944570-m03_multinode-944570-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 ssh -n multinode-944570-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 ssh -n multinode-944570-m02 "sudo cat /home/docker/cp-test_multinode-944570-m03_multinode-944570-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.31s)
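
A condensed sketch of the copy-and-verify roundtrip CopyFile performs: cp a file into a node, cat it back over ssh, and compare. The binary, profile, node names, and paths come from the log above; comparing whitespace-trimmed bytes is a simplification of the helper's check.

	// Sketch: minikube cp followed by an ssh readback and comparison.
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const bin = "out/minikube-linux-amd64"
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			panic(err)
		}
		if err := exec.Command(bin, "-p", "multinode-944570", "cp",
			"testdata/cp-test.txt", "multinode-944570:/home/docker/cp-test.txt").Run(); err != nil {
			panic(err)
		}
		got, err := exec.Command(bin, "-p", "multinode-944570", "ssh", "-n", "multinode-944570",
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			panic(err)
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			panic(fmt.Sprintf("content mismatch: got %q, want %q", got, want))
		}
		fmt.Println("cp roundtrip verified")
	}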

TestMultiNode/serial/StopNode (3.97s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-944570 node stop m03: (3.081754446s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-944570 status: exit status 7 (442.14761ms)

-- stdout --
	multinode-944570
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-944570-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-944570-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-944570 status --alsologtostderr: exit status 7 (443.127522ms)

-- stdout --
	multinode-944570
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-944570-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-944570-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0830 20:28:06.128546  244253 out.go:296] Setting OutFile to fd 1 ...
	I0830 20:28:06.128674  244253 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 20:28:06.128685  244253 out.go:309] Setting ErrFile to fd 2...
	I0830 20:28:06.128691  244253 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 20:28:06.128922  244253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-222139/.minikube/bin
	I0830 20:28:06.129096  244253 out.go:303] Setting JSON to false
	I0830 20:28:06.129146  244253 mustload.go:65] Loading cluster: multinode-944570
	I0830 20:28:06.129239  244253 notify.go:220] Checking for updates...
	I0830 20:28:06.129557  244253 config.go:182] Loaded profile config "multinode-944570": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0830 20:28:06.129574  244253 status.go:255] checking status of multinode-944570 ...
	I0830 20:28:06.129919  244253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:28:06.129994  244253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:28:06.145573  244253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40449
	I0830 20:28:06.145990  244253 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:28:06.146527  244253 main.go:141] libmachine: Using API Version  1
	I0830 20:28:06.146547  244253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:28:06.146891  244253 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:28:06.147089  244253 main.go:141] libmachine: (multinode-944570) Calling .GetState
	I0830 20:28:06.148765  244253 status.go:330] multinode-944570 host status = "Running" (err=<nil>)
	I0830 20:28:06.148782  244253 host.go:66] Checking if "multinode-944570" exists ...
	I0830 20:28:06.149044  244253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:28:06.149085  244253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:28:06.166782  244253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37553
	I0830 20:28:06.167227  244253 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:28:06.167788  244253 main.go:141] libmachine: Using API Version  1
	I0830 20:28:06.167825  244253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:28:06.168128  244253 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:28:06.168293  244253 main.go:141] libmachine: (multinode-944570) Calling .GetIP
	I0830 20:28:06.170935  244253 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:28:06.171347  244253 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:28:06.171390  244253 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:28:06.171549  244253 host.go:66] Checking if "multinode-944570" exists ...
	I0830 20:28:06.171832  244253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:28:06.171865  244253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:28:06.188055  244253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42011
	I0830 20:28:06.188443  244253 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:28:06.188890  244253 main.go:141] libmachine: Using API Version  1
	I0830 20:28:06.188915  244253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:28:06.189238  244253 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:28:06.189413  244253 main.go:141] libmachine: (multinode-944570) Calling .DriverName
	I0830 20:28:06.189605  244253 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0830 20:28:06.189632  244253 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
	I0830 20:28:06.192401  244253 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:28:06.192796  244253 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
	I0830 20:28:06.192817  244253 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
	I0830 20:28:06.192962  244253 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
	I0830 20:28:06.193146  244253 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
	I0830 20:28:06.193324  244253 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
	I0830 20:28:06.193460  244253 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa Username:docker}
	I0830 20:28:06.287925  244253 ssh_runner.go:195] Run: systemctl --version
	I0830 20:28:06.293435  244253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 20:28:06.306764  244253 kubeconfig.go:92] found "multinode-944570" server: "https://192.168.39.254:8443"
	I0830 20:28:06.306793  244253 api_server.go:166] Checking apiserver status ...
	I0830 20:28:06.306823  244253 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 20:28:06.318380  244253 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1820/cgroup
	I0830 20:28:06.326420  244253 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod5c113dc76381297356051f3bc6bc6fd1/adc09d4d4deb205b07e6727d1d4bbe19ed9e49681a95f63c5b45b66f3a74a387"
	I0830 20:28:06.326479  244253 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod5c113dc76381297356051f3bc6bc6fd1/adc09d4d4deb205b07e6727d1d4bbe19ed9e49681a95f63c5b45b66f3a74a387/freezer.state
	I0830 20:28:06.334235  244253 api_server.go:204] freezer state: "THAWED"
	I0830 20:28:06.334263  244253 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0830 20:28:06.340891  244253 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0830 20:28:06.340918  244253 status.go:421] multinode-944570 apiserver status = Running (err=<nil>)
	I0830 20:28:06.340928  244253 status.go:257] multinode-944570 status: &{Name:multinode-944570 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0830 20:28:06.340946  244253 status.go:255] checking status of multinode-944570-m02 ...
	I0830 20:28:06.341230  244253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:28:06.341265  244253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:28:06.357423  244253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40141
	I0830 20:28:06.357803  244253 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:28:06.358312  244253 main.go:141] libmachine: Using API Version  1
	I0830 20:28:06.358337  244253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:28:06.358663  244253 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:28:06.358848  244253 main.go:141] libmachine: (multinode-944570-m02) Calling .GetState
	I0830 20:28:06.360510  244253 status.go:330] multinode-944570-m02 host status = "Running" (err=<nil>)
	I0830 20:28:06.360536  244253 host.go:66] Checking if "multinode-944570-m02" exists ...
	I0830 20:28:06.360808  244253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:28:06.360834  244253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:28:06.376796  244253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36235
	I0830 20:28:06.377195  244253 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:28:06.377757  244253 main.go:141] libmachine: Using API Version  1
	I0830 20:28:06.377785  244253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:28:06.378116  244253 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:28:06.378319  244253 main.go:141] libmachine: (multinode-944570-m02) Calling .GetIP
	I0830 20:28:06.381228  244253 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:28:06.381647  244253 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
	I0830 20:28:06.381674  244253 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:28:06.381812  244253 host.go:66] Checking if "multinode-944570-m02" exists ...
	I0830 20:28:06.382252  244253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:28:06.382305  244253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:28:06.396942  244253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33951
	I0830 20:28:06.397365  244253 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:28:06.397975  244253 main.go:141] libmachine: Using API Version  1
	I0830 20:28:06.398009  244253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:28:06.398364  244253 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:28:06.398581  244253 main.go:141] libmachine: (multinode-944570-m02) Calling .DriverName
	I0830 20:28:06.398766  244253 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0830 20:28:06.398793  244253 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
	I0830 20:28:06.401690  244253 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:28:06.402103  244253 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
	I0830 20:28:06.402144  244253 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
	I0830 20:28:06.402288  244253 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
	I0830 20:28:06.402477  244253 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
	I0830 20:28:06.402651  244253 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
	I0830 20:28:06.402800  244253 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02/id_rsa Username:docker}
	I0830 20:28:06.494973  244253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 20:28:06.508313  244253 status.go:257] multinode-944570-m02 status: &{Name:multinode-944570-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0830 20:28:06.508349  244253 status.go:255] checking status of multinode-944570-m03 ...
	I0830 20:28:06.508657  244253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:28:06.508699  244253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:28:06.524747  244253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44355
	I0830 20:28:06.525166  244253 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:28:06.525800  244253 main.go:141] libmachine: Using API Version  1
	I0830 20:28:06.525818  244253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:28:06.526215  244253 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:28:06.526418  244253 main.go:141] libmachine: (multinode-944570-m03) Calling .GetState
	I0830 20:28:06.528058  244253 status.go:330] multinode-944570-m03 host status = "Stopped" (err=<nil>)
	I0830 20:28:06.528074  244253 status.go:343] host is not running, skipping remaining checks
	I0830 20:28:06.528081  244253 status.go:257] multinode-944570-m03 status: &{Name:multinode-944570-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
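
The status walk above probes the apiserver in two steps: read the pod's freezer cgroup state, then GET /healthz. A minimal Go sketch of the same two checks (not part of the test suite; the cgroup path and endpoint are copied from this run and change per pod/cluster):

```go
// Sketch: the two apiserver probes from the log above — read the pod's
// freezer cgroup state, then GET /healthz over TLS.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"strings"
	"time"
)

func main() {
	// Path copied verbatim from this run; the pod/container hashes differ per cluster.
	const freezer = "/sys/fs/cgroup/freezer/kubepods/burstable/" +
		"pod5c113dc76381297356051f3bc6bc6fd1/" +
		"adc09d4d4deb205b07e6727d1d4bbe19ed9e49681a95f63c5b45b66f3a74a387/freezer.state"
	if b, err := os.ReadFile(freezer); err == nil {
		fmt.Println("freezer state:", strings.TrimSpace(string(b))) // "THAWED" = not paused
	}

	// The apiserver cert is signed by the cluster-local CA, so this quick
	// probe skips verification; real callers should load the CA instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Fprintln(os.Stderr, "healthz:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status) // 200 with body "ok" = Running
}
```
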
--- PASS: TestMultiNode/serial/StopNode (3.97s)

TestMultiNode/serial/RestartKeepsNodes (259.86s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-944570
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-944570
E0830 20:29:23.288092  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
E0830 20:29:50.972492  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
E0830 20:29:58.662785  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-944570: (1m55.521626035s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-944570 --wait=true -v=8 --alsologtostderr
E0830 20:30:43.392808  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
E0830 20:31:21.712110  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-944570 --wait=true -v=8 --alsologtostderr: (2m24.251574052s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-944570
--- PASS: TestMultiNode/serial/RestartKeepsNodes (259.86s)

TestMultiNode/serial/DeleteNode (1.73s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-944570 node delete m03: (1.194815577s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
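
The go-template in the step above is how the test asserts that every node reports a Ready condition of "True". A standalone Go sketch of the same check, shelling out to kubectl (assumes kubectl is on PATH with a configured context):

```go
// Sketch: the node-readiness assertion behind the go-template above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{range .items}}{{range .status.conditions}}` +
		`{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	// One line per node: the status of its Ready condition.
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			fmt.Println("found a node that is not Ready:", status)
			return
		}
	}
	fmt.Println("all nodes Ready")
}
```
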
--- PASS: TestMultiNode/serial/DeleteNode (1.73s)

TestMultiNode/serial/StopMultiNode (25.51s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-944570 stop: (25.36043819s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-944570 status: exit status 7 (76.827531ms)

                                                
                                                
-- stdout --
	multinode-944570
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-944570-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-944570 status --alsologtostderr: exit status 7 (75.780997ms)

                                                
                                                
-- stdout --
	multinode-944570
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-944570-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0830 20:33:14.255556  245903 out.go:296] Setting OutFile to fd 1 ...
	I0830 20:33:14.255707  245903 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 20:33:14.255713  245903 out.go:309] Setting ErrFile to fd 2...
	I0830 20:33:14.255720  245903 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 20:33:14.255941  245903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-222139/.minikube/bin
	I0830 20:33:14.256122  245903 out.go:303] Setting JSON to false
	I0830 20:33:14.256155  245903 mustload.go:65] Loading cluster: multinode-944570
	I0830 20:33:14.256268  245903 notify.go:220] Checking for updates...
	I0830 20:33:14.256623  245903 config.go:182] Loaded profile config "multinode-944570": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0830 20:33:14.256637  245903 status.go:255] checking status of multinode-944570 ...
	I0830 20:33:14.257058  245903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:33:14.257147  245903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:33:14.271689  245903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34031
	I0830 20:33:14.272072  245903 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:33:14.272607  245903 main.go:141] libmachine: Using API Version  1
	I0830 20:33:14.272630  245903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:33:14.272945  245903 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:33:14.273129  245903 main.go:141] libmachine: (multinode-944570) Calling .GetState
	I0830 20:33:14.274601  245903 status.go:330] multinode-944570 host status = "Stopped" (err=<nil>)
	I0830 20:33:14.274619  245903 status.go:343] host is not running, skipping remaining checks
	I0830 20:33:14.274625  245903 status.go:257] multinode-944570 status: &{Name:multinode-944570 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0830 20:33:14.274643  245903 status.go:255] checking status of multinode-944570-m02 ...
	I0830 20:33:14.274913  245903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0830 20:33:14.274949  245903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 20:33:14.288772  245903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36573
	I0830 20:33:14.289168  245903 main.go:141] libmachine: () Calling .GetVersion
	I0830 20:33:14.289632  245903 main.go:141] libmachine: Using API Version  1
	I0830 20:33:14.289656  245903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 20:33:14.289955  245903 main.go:141] libmachine: () Calling .GetMachineName
	I0830 20:33:14.290131  245903 main.go:141] libmachine: (multinode-944570-m02) Calling .GetState
	I0830 20:33:14.291527  245903 status.go:330] multinode-944570-m02 host status = "Stopped" (err=<nil>)
	I0830 20:33:14.291540  245903 status.go:343] host is not running, skipping remaining checks
	I0830 20:33:14.291545  245903 status.go:257] multinode-944570-m02 status: &{Name:multinode-944570-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
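
Note that `minikube status` deliberately exits non-zero (7 in this run) when components are stopped, while still printing valid output. A sketch of how a caller can read the state without treating that exit code as a command failure:

```go
// Sketch: reading `minikube status` without treating a non-zero exit as a
// hard failure — the exit code (7 here) encodes "stopped" state.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "multinode-944570", "status").Output()
	fmt.Print(string(out)) // stdout is populated even on a non-zero exit
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("status exit code:", ee.ExitCode()) // 7 in this run = host stopped
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
	}
}
```
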
--- PASS: TestMultiNode/serial/StopMultiNode (25.51s)

TestMultiNode/serial/RestartMultiNode (100.84s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-944570 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0830 20:34:23.287448  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-944570 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m40.274406481s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-944570 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (100.84s)

TestMultiNode/serial/ValidateNameConflict (49.26s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-944570
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-944570-m02 --driver=kvm2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-944570-m02 --driver=kvm2 : exit status 14 (62.026744ms)

                                                
                                                
-- stdout --
	* [multinode-944570-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17145
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17145-222139/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-222139/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-944570-m02' is duplicated with machine name 'multinode-944570-m02' in profile 'multinode-944570'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-944570-m03 --driver=kvm2 
E0830 20:34:58.659629  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-944570-m03 --driver=kvm2 : (47.931452887s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-944570
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-944570: exit status 80 (215.925469ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-944570
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-944570-m03 already exists in multinode-944570-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-944570-m03
E0830 20:35:43.393512  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-944570-m03: (1.005952623s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.26s)

TestPreload (232.82s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-327905 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0830 20:37:06.442243  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-327905 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (2m17.082137941s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-327905 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-327905 image pull gcr.io/k8s-minikube/busybox: (2.184171032s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-327905
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-327905: (13.098617563s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-327905 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0830 20:39:23.287357  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-327905 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m19.194758347s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-327905 image list
helpers_test.go:175: Cleaning up "test-preload-327905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-327905
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-327905: (1.058461519s)
--- PASS: TestPreload (232.82s)

TestScheduledStopUnix (124.41s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-377929 --memory=2048 --driver=kvm2 
E0830 20:39:58.662954  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-377929 --memory=2048 --driver=kvm2 : (52.832984669s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-377929 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-377929 -n scheduled-stop-377929
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-377929 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-377929 --cancel-scheduled
E0830 20:40:43.393436  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
E0830 20:40:46.334361  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-377929 -n scheduled-stop-377929
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-377929
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-377929 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-377929
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-377929: exit status 7 (60.229111ms)

                                                
                                                
-- stdout --
	scheduled-stop-377929
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-377929 -n scheduled-stop-377929
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-377929 -n scheduled-stop-377929: exit status 7 (59.630992ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-377929" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-377929
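
The sequence above arms, cancels, and re-arms a scheduled stop, then polls status until the VM is down. A sketch of the same flow against a hypothetical profile name ("demo"); exit status 7 from the status command is the expected result once the host has stopped:

```go
// Sketch: arm a scheduled stop and poll until the host reports Stopped.
// "demo" is a hypothetical profile; the flags mirror the steps above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	profile := "demo" // hypothetical
	if err := exec.Command("minikube", "stop", "-p", profile, "--schedule", "15s").Run(); err != nil {
		fmt.Println("schedule failed:", err)
		return
	}
	for i := 0; i < 12; i++ {
		// Exits non-zero (7) once the host is stopped; we only need stdout.
		out, _ := exec.Command("minikube", "status",
			"--format", "{{.Host}}", "-p", profile).Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			fmt.Println("host stopped on schedule")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the scheduled stop")
}
```
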
--- PASS: TestScheduledStopUnix (124.41s)

TestSkaffold (140.19s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1126422030 version
skaffold_test.go:63: skaffold version: v2.6.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-318104 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-318104 --memory=2600 --driver=kvm2 : (49.478720895s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1126422030 run --minikube-profile skaffold-318104 --kube-context skaffold-318104 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1126422030 run --minikube-profile skaffold-318104 --kube-context skaffold-318104 --status-check=true --port-forward=false --interactive=false: (1m16.677352905s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-68975d954c-l7ctc" [e89bb264-5b8c-49d9-8d8e-1e735c928a22] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.017143715s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-647ccdc4fb-h97p8" [dba4fbbc-fd62-4476-84df-3e5dcfe9ab29] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.009682826s
helpers_test.go:175: Cleaning up "skaffold-318104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-318104
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-318104: (1.181587669s)
--- PASS: TestSkaffold (140.19s)

TestRunningBinaryUpgrade (266.73s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.6.2.2417639827.exe start -p running-upgrade-577784 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.6.2.2417639827.exe start -p running-upgrade-577784 --memory=2200 --vm-driver=kvm2 : (2m6.110213808s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-577784 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E0830 20:49:04.277412  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/skaffold-318104/client.crt: no such file or directory
E0830 20:49:14.518361  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/skaffold-318104/client.crt: no such file or directory
version_upgrade_test.go:142: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-577784 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (2m16.772863865s)
helpers_test.go:175: Cleaning up "running-upgrade-577784" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-577784
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-577784: (1.786360347s)
--- PASS: TestRunningBinaryUpgrade (266.73s)

TestKubernetesUpgrade (174.9s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-030289 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
E0830 20:48:01.712460  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-030289 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (1m6.636110213s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-030289
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-030289: (13.340218259s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-030289 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-030289 status --format={{.Host}}: exit status 7 (94.73767ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-030289 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-030289 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2 : (1m4.299130734s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-030289 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-030289 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-030289 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (125.902869ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-030289] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17145
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17145-222139/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-222139/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-030289
	    minikube start -p kubernetes-upgrade-030289 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0302892 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.1, by running:
	    
	    minikube start -p kubernetes-upgrade-030289 --kubernetes-version=v1.28.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-030289 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-030289 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2 : (29.143123621s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-030289" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-030289
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-030289: (1.175336796s)
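
The `kubectl version --output=json` step above is how the test confirms the control plane actually landed on the target version. A sketch decoding just the field that matters (the full payload has more fields than shown here):

```go
// Sketch: the post-upgrade version check, decoding `kubectl version
// --output=json` to confirm the server is on v1.28.1.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// versionOutput mirrors only the serverVersion.gitVersion field.
type versionOutput struct {
	ServerVersion struct {
		GitVersion string `json:"gitVersion"`
	} `json:"serverVersion"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "kubernetes-upgrade-030289",
		"version", "--output=json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var v versionOutput
	if err := json.Unmarshal(out, &v); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	fmt.Println("server version:", v.ServerVersion.GitVersion) // expect "v1.28.1"
}
```
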
--- PASS: TestKubernetesUpgrade (174.90s)

TestStoppedBinaryUpgrade/Setup (1.79s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.79s)

TestStoppedBinaryUpgrade/Upgrade (204.81s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.6.2.1650191595.exe start -p stopped-upgrade-512876 --memory=2200 --vm-driver=kvm2 
E0830 20:49:34.998557  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/skaffold-318104/client.crt: no such file or directory
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.6.2.1650191595.exe start -p stopped-upgrade-512876 --memory=2200 --vm-driver=kvm2 : (1m58.942440604s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.6.2.1650191595.exe -p stopped-upgrade-512876 stop
E0830 20:51:37.695229  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/gvisor-450430/client.crt: no such file or directory
E0830 20:51:37.700553  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/gvisor-450430/client.crt: no such file or directory
E0830 20:51:37.710849  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/gvisor-450430/client.crt: no such file or directory
E0830 20:51:37.731201  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/gvisor-450430/client.crt: no such file or directory
E0830 20:51:37.771582  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/gvisor-450430/client.crt: no such file or directory
E0830 20:51:37.852028  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/gvisor-450430/client.crt: no such file or directory
E0830 20:51:37.880328  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/skaffold-318104/client.crt: no such file or directory
E0830 20:51:38.012539  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/gvisor-450430/client.crt: no such file or directory
E0830 20:51:38.333090  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/gvisor-450430/client.crt: no such file or directory
E0830 20:51:38.973295  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/gvisor-450430/client.crt: no such file or directory
E0830 20:51:40.253572  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/gvisor-450430/client.crt: no such file or directory
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.6.2.1650191595.exe -p stopped-upgrade-512876 stop: (13.414029338s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-512876 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E0830 20:51:42.814225  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/gvisor-450430/client.crt: no such file or directory
E0830 20:51:47.934897  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/gvisor-450430/client.crt: no such file or directory
E0830 20:51:58.176113  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/gvisor-450430/client.crt: no such file or directory
version_upgrade_test.go:210: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-512876 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m12.451238487s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (204.81s)

TestPause/serial/Start (88.58s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-236097 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
E0830 20:50:15.959720  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/skaffold-318104/client.crt: no such file or directory
E0830 20:50:43.393545  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-236097 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m28.576677995s)
--- PASS: TestPause/serial/Start (88.58s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-155751 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-155751 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (83.278083ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-155751] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17145
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17145-222139/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-222139/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (59.92s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-155751 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-155751 --driver=kvm2 : (59.593960994s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-155751 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (59.92s)

TestNetworkPlugins/group/auto/Start (82.41s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-145859 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-145859 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m22.408534054s)
--- PASS: TestNetworkPlugins/group/auto/Start (82.41s)

TestPause/serial/SecondStartNoReconfiguration (53.06s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-236097 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-236097 --alsologtostderr -v=1 --driver=kvm2 : (53.02830345s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (53.06s)

TestNoKubernetes/serial/StartWithStopK8s (42.6s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-155751 --no-kubernetes --driver=kvm2 
E0830 20:52:18.656302  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/gvisor-450430/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-155751 --no-kubernetes --driver=kvm2 : (40.737337711s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-155751 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-155751 status -o json: exit status 2 (247.205065ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-155751","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-155751
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-155751: (1.610930995s)
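
The `status -o json` payload above decodes into a small struct; field names below are copied from the log line, and exit status 2 still arrives with valid JSON on stdout, so decode first and inspect the error afterwards:

```go
// Sketch: decoding the `minikube status -o json` payload shown above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileStatus mirrors the JSON fields printed in the log.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	out, runErr := exec.Command("minikube", "-p", "NoKubernetes-155751",
		"status", "-o", "json").Output()
	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("decode failed:", err, "; run error:", runErr)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}
```
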
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (42.60s)

TestPause/serial/Pause (0.66s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-236097 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.66s)

TestPause/serial/VerifyStatus (0.27s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-236097 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-236097 --output=json --layout=cluster: exit status 2 (273.009953ms)

                                                
                                                
-- stdout --
	{"Name":"pause-236097","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-236097","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
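
The cluster-layout status above uses HTTP-style codes per component (418 for a paused component, 405 for a stopped kubelet, 200 for OK). A partial decoder mirroring the payload shown; exit status 2 is expected while the cluster is paused:

```go
// Sketch: a partial decoder for `minikube status --output=json
// --layout=cluster`, shaped after the log line above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []struct {
		Name       string
		StatusName string
	}
}

func main() {
	// Non-zero exit is expected while paused; stdout still carries the JSON.
	out, _ := exec.Command("minikube", "status", "-p", "pause-236097",
		"--output=json", "--layout=cluster").Output()
	var cs clusterStatus
	if err := json.Unmarshal(out, &cs); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Println(cs.Name, "=>", cs.StatusName) // "Paused" in this run
	for _, n := range cs.Nodes {
		fmt.Println("  node", n.Name, "=>", n.StatusName)
	}
}
```
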
--- PASS: TestPause/serial/VerifyStatus (0.27s)

TestPause/serial/Unpause (0.69s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-236097 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

TestPause/serial/PauseAgain (1s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-236097 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-236097 --alsologtostderr -v=5: (1.000328846s)
--- PASS: TestPause/serial/PauseAgain (1.00s)

TestPause/serial/DeletePaused (1.33s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-236097 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-236097 --alsologtostderr -v=5: (1.329057871s)
--- PASS: TestPause/serial/DeletePaused (1.33s)

TestPause/serial/VerifyDeletedResources (0.67s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.67s)

TestNetworkPlugins/group/kindnet/Start (78.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-145859 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-145859 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m18.146550874s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (78.15s)

TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-145859 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

TestNetworkPlugins/group/auto/NetCatPod (12.43s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-145859 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mqstb" [63fae803-fa77-4f84-a6ac-3a5a5c9f3a80] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mqstb" [63fae803-fa77-4f84-a6ac-3a5a5c9f3a80] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.013130838s
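
The label-based readiness gate polled above can also be expressed with `kubectl wait`. A sketch using the same selector and context (the timeout mirrors the test's 15m budget):

```go
// Sketch: "wait for app=netcat pods to become Ready", delegated to
// kubectl wait instead of a hand-rolled polling loop.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "auto-145859", "wait",
		"--for=condition=Ready", "pod", "-l", "app=netcat", "--timeout=15m")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("pods never became Ready:", err)
	}
}
```
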
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.43s)

TestNoKubernetes/serial/Start (49.3s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-155751 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-155751 --no-kubernetes --driver=kvm2 : (49.298754459s)
--- PASS: TestNoKubernetes/serial/Start (49.30s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.29s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-512876
version_upgrade_test.go:218: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-512876: (1.288526894s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.29s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-145859 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-145859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-145859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

TestNetworkPlugins/group/calico/Start (130.4s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-145859 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
E0830 20:52:59.616482  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/gvisor-450430/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-145859 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m10.402147856s)
--- PASS: TestNetworkPlugins/group/calico/Start (130.40s)

TestNetworkPlugins/group/custom-flannel/Start (127.9s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-145859 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-145859 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (2m7.899353148s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (127.90s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-155751 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-155751 "sudo systemctl is-active --quiet service kubelet": exit status 1 (208.989835ms)
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
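
This check passes because the remote command fails: systemctl is-active --quiet exits 0 only when the unit is active, status 3 means inactive, and minikube ssh surfaces that as a non-zero exit, which is exactly what a --no-kubernetes profile should produce. A small Go sketch interpreting the same check, reusing the binary path and profile name from the log (illustrative only):

	// kubeletcheck.go: report whether kubelet is active inside a minikube
	// VM by running `systemctl is-active` over `minikube ssh`. Sketch only;
	// the real test asserts on the command's exit status directly.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func kubeletActive(profile string) (bool, error) {
		cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile,
			"sudo systemctl is-active --quiet service kubelet")
		if err := cmd.Run(); err != nil {
			var exitErr *exec.ExitError
			if errors.As(err, &exitErr) {
				return false, nil // non-zero exit: unit is not active
			}
			return false, err // could not run the command at all
		}
		return true, nil
	}

	func main() {
		active, err := kubeletActive("NoKubernetes-155751")
		fmt.Println("kubelet active:", active, "err:", err)
	}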

TestNoKubernetes/serial/ProfileList (0.92s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.92s)

TestNoKubernetes/serial/Stop (2.17s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-155751
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-155751: (2.169704853s)
--- PASS: TestNoKubernetes/serial/Stop (2.17s)

TestNoKubernetes/serial/StartNoArgs (79.69s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-155751 --driver=kvm2 
E0830 20:53:46.443490  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
E0830 20:53:54.037544  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/skaffold-318104/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-155751 --driver=kvm2 : (1m19.685711718s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (79.69s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-fqnnx" [bc138583-582b-4f85-a1cc-7c5e78d19645] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.022075286s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
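
The ControllerPod and NetCatPod steps are label-selector waits: poll a namespace until a pod matching the selector reports Running, within the 10m0s or 15m0s budget shown in the log (the suite's own helper lives in helpers_test.go). A compact client-go sketch of such a wait; the kubeconfig path here is a placeholder, and the namespace and app=kindnet selector are taken from the log:

	// podwait.go: wait until at least one pod matching a label selector
	// is Running in the given namespace. Sketch only; error handling trimmed.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; substitute your own.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		err = wait.PollImmediate(2*time.Second, 10*time.Minute, func() (bool, error) {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "app=kindnet"})
			if err != nil {
				return false, err
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil // keep polling until the budget runs out
		})
		fmt.Println("wait result:", err)
	}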

TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-145859 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.58s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-145859 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-c2d4k" [37d54be7-5676-4f28-aa40-d2bbd1f1ba70] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-c2d4k" [37d54be7-5676-4f28-aa40-d2bbd1f1ba70] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.016142488s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.58s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-145859 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-145859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-145859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/false/Start (96.71s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-145859 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-145859 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m36.708212229s)
--- PASS: TestNetworkPlugins/group/false/Start (96.71s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-155751 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-155751 "sudo systemctl is-active --quiet service kubelet": exit status 1 (229.731617ms)
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

TestNetworkPlugins/group/enable-default-cni/Start (98.65s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-145859 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-145859 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m38.654768551s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (98.65s)

TestNetworkPlugins/group/calico/ControllerPod (5.04s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-h2shw" [932d7e8c-b4ff-422e-b9ac-a91497d585eb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.042265709s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

TestNetworkPlugins/group/calico/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-145859 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

TestNetworkPlugins/group/calico/NetCatPod (12.75s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-145859 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-b656g" [c5afcf73-92c2-4678-9330-6b762a32c327] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-b656g" [c5afcf73-92c2-4678-9330-6b762a32c327] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.084856977s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.75s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-145859 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.34s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-145859 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-txqr7" [7e4da75c-9977-42e7-91c3-0a7d2816f356] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-txqr7" [7e4da75c-9977-42e7-91c3-0a7d2816f356] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.012960358s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.34s)

TestNetworkPlugins/group/calico/DNS (0.44s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-145859 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.44s)

TestNetworkPlugins/group/calico/Localhost (0.34s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-145859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.34s)

TestNetworkPlugins/group/calico/HairPin (0.24s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-145859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-145859 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-145859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-145859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

TestNetworkPlugins/group/flannel/Start (89.47s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-145859 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-145859 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m29.468383492s)
--- PASS: TestNetworkPlugins/group/flannel/Start (89.47s)

TestNetworkPlugins/group/bridge/Start (100.72s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-145859 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-145859 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m40.716163546s)
--- PASS: TestNetworkPlugins/group/bridge/Start (100.72s)

TestNetworkPlugins/group/false/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-145859 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.21s)

TestNetworkPlugins/group/false/NetCatPod (11.38s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-145859 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-p2ntl" [69efb6f7-4b42-4159-a8c6-b1ed879be9ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-p2ntl" [69efb6f7-4b42-4159-a8c6-b1ed879be9ab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.013417051s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.38s)

TestNetworkPlugins/group/false/DNS (0.28s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-145859 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.28s)

TestNetworkPlugins/group/false/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-145859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.22s)

TestNetworkPlugins/group/false/HairPin (0.25s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-145859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.25s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-145859 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.47s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-145859 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xqpmz" [73624449-f2e5-481b-91ab-5f2c1413450d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xqpmz" [73624449-f2e5-481b-91ab-5f2c1413450d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.022854145s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.47s)

TestNetworkPlugins/group/kubenet/Start (110.44s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-145859 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-145859 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m50.442304217s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (110.44s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-145859 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-145859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-145859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestStartStop/group/old-k8s-version/serial/FirstStart (147.69s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-899372 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-899372 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m27.688858789s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (147.69s)

TestNetworkPlugins/group/flannel/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7n8qh" [fc63d70f-14df-4120-8bc7-7a14283937a7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.024687927s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-145859 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/flannel/NetCatPod (14.43s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-145859 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-87bhr" [5646c566-f224-4c03-830e-f551f45c9bfa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0830 20:57:26.334869  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-87bhr" [5646c566-f224-4c03-830e-f551f45c9bfa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.011589265s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.43s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-145859 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

TestNetworkPlugins/group/bridge/NetCatPod (12.33s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-145859 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-244x9" [3841addc-8dea-4a09-9f5e-8373e3e46d3f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-244x9" [3841addc-8dea-4a09-9f5e-8373e3e46d3f] Running
E0830 20:57:43.774526  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/auto-145859/client.crt: no such file or directory
E0830 20:57:43.781786  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/auto-145859/client.crt: no such file or directory
E0830 20:57:43.792117  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/auto-145859/client.crt: no such file or directory
E0830 20:57:43.812514  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/auto-145859/client.crt: no such file or directory
E0830 20:57:43.852967  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/auto-145859/client.crt: no such file or directory
E0830 20:57:43.933627  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/auto-145859/client.crt: no such file or directory
E0830 20:57:44.094603  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/auto-145859/client.crt: no such file or directory
E0830 20:57:44.414728  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/auto-145859/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.017242985s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.33s)

TestNetworkPlugins/group/flannel/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-145859 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-145859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-145859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestNetworkPlugins/group/bridge/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-145859 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-145859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0830 20:57:45.054901  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/auto-145859/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-145859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

TestStartStop/group/no-preload/serial/FirstStart (93.22s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-706840 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-706840 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.1: (1m33.220370366s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (93.22s)

TestStartStop/group/embed-certs/serial/FirstStart (97.21s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-991570 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.1
E0830 20:58:04.257748  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/auto-145859/client.crt: no such file or directory
E0830 20:58:24.738573  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/auto-145859/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-991570 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.1: (1m37.20660841s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (97.21s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-145859 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kubenet/NetCatPod (14.43s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-145859 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-v6lqr" [95b0bf6f-7857-4cee-85c6-e21f560b24ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-v6lqr" [95b0bf6f-7857-4cee-85c6-e21f560b24ff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 14.011755404s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (14.43s)

TestNetworkPlugins/group/kubenet/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-145859 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.18s)

TestNetworkPlugins/group/kubenet/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-145859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.17s)

TestNetworkPlugins/group/kubenet/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-145859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.17s)
E0830 21:04:11.562812  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/kubenet-145859/client.crt: no such file or directory
E0830 21:04:22.317595  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/enable-default-cni-145859/client.crt: no such file or directory
E0830 21:04:23.287341  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
E0830 21:04:24.538224  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/kindnet-145859/client.crt: no such file or directory
E0830 21:04:36.107235  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/old-k8s-version-899372/client.crt: no such file or directory
E0830 21:04:36.112534  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/old-k8s-version-899372/client.crt: no such file or directory
E0830 21:04:36.122776  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/old-k8s-version-899372/client.crt: no such file or directory
E0830 21:04:36.143077  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/old-k8s-version-899372/client.crt: no such file or directory
E0830 21:04:36.183357  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/old-k8s-version-899372/client.crt: no such file or directory
E0830 21:04:36.263708  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/old-k8s-version-899372/client.crt: no such file or directory
E0830 21:04:36.423884  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/old-k8s-version-899372/client.crt: no such file or directory
E0830 21:04:36.744458  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/old-k8s-version-899372/client.crt: no such file or directory
E0830 21:04:37.384941  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/old-k8s-version-899372/client.crt: no such file or directory
E0830 21:04:38.665092  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/old-k8s-version-899372/client.crt: no such file or directory
E0830 21:04:41.225541  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/old-k8s-version-899372/client.crt: no such file or directory
E0830 21:04:41.713375  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
E0830 21:04:46.346420  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/old-k8s-version-899372/client.crt: no such file or directory
E0830 21:04:52.523331  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/kubenet-145859/client.crt: no such file or directory
E0830 21:04:56.587007  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/old-k8s-version-899372/client.crt: no such file or directory
E0830 21:04:58.151914  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/flannel-145859/client.crt: no such file or directory
E0830 21:04:58.659781  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.89s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-385479 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.1
E0830 20:59:05.699410  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/auto-145859/client.crt: no such file or directory
E0830 20:59:07.095428  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/kindnet-145859/client.crt: no such file or directory
E0830 20:59:17.335958  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/kindnet-145859/client.crt: no such file or directory
E0830 20:59:23.286867  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/ingress-addon-legacy-290437/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-385479 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.1: (1m12.893447896s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.89s)

TestStartStop/group/no-preload/serial/DeployApp (10.53s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-706840 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0468de3e-f267-497f-8286-a017f0e0f000] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0468de3e-f267-497f-8286-a017f0e0f000] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.044679046s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-706840 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.53s)

TestStartStop/group/old-k8s-version/serial/DeployApp (12.55s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-899372 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e0acdb5a-5f86-4c16-a859-2121ab7c4114] Pending
helpers_test.go:344: "busybox" [e0acdb5a-5f86-4c16-a859-2121ab7c4114] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0830 20:59:37.816525  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/kindnet-145859/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e0acdb5a-5f86-4c16-a859-2121ab7c4114] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.046008265s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-899372 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.55s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.94s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-706840 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-706840 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.813509875s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-706840 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.94s)

TestStartStop/group/embed-certs/serial/DeployApp (10.61s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-991570 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [07d0d8e8-50a8-43f9-a2a6-b2f8c161f956] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [07d0d8e8-50a8-43f9-a2a6-b2f8c161f956] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.043271302s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-991570 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.61s)

TestStartStop/group/no-preload/serial/Stop (13.2s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-706840 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-706840 --alsologtostderr -v=3: (13.197510298s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.20s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-899372 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-899372 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/old-k8s-version/serial/Stop (13.14s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-899372 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-899372 --alsologtostderr -v=3: (13.140381687s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.14s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-991570 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-991570 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.085723668s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-991570 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/embed-certs/serial/Stop (13.12s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-991570 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-991570 --alsologtostderr -v=3: (13.117902749s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.12s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-706840 -n no-preload-706840
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-706840 -n no-preload-706840: exit status 7 (75.246884ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-706840 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
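
The "(may be ok)" note reflects how minikube status reports a stopped profile. Assuming its exit code is a bitmask of not-running components (host = 1, kubelet = 2, apiserver = 4; an assumption, not confirmed by this log), exit status 7 is exactly what a cleanly stopped cluster returns. A tiny decoding sketch under that assumption:

	// statuscode.go: decode a minikube `status` exit code, assuming the
	// bitmask layout (host=1, kubelet=2, apiserver=4) described above.
	package main

	import "fmt"

	const (
		hostNotRunning      = 1 << 0
		kubeletNotRunning   = 1 << 1
		apiserverNotRunning = 1 << 2
	)

	func main() {
		code := 7 // exit status seen in the log for a stopped profile
		fmt.Println("host stopped:     ", code&hostNotRunning != 0)
		fmt.Println("kubelet stopped:  ", code&kubeletNotRunning != 0)
		fmt.Println("apiserver stopped:", code&apiserverNotRunning != 0)
	}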

TestStartStop/group/no-preload/serial/SecondStart (308.37s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-706840 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.1
E0830 20:59:58.659106  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/addons-120922/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-706840 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.1: (5m8.092158327s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-706840 -n no-preload-706840
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (308.37s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-899372 -n old-k8s-version-899372
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-899372 -n old-k8s-version-899372: exit status 7 (71.541415ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-899372 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (72.63s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-899372 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-899372 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (1m12.374043515s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-899372 -n old-k8s-version-899372
E0830 21:01:15.134254  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/false-145859/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (72.63s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-991570 -n embed-certs-991570
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-991570 -n embed-certs-991570: exit status 7 (68.274784ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-991570 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (343.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-991570 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.1
E0830 21:00:07.654483  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/calico-145859/client.crt: no such file or directory
E0830 21:00:07.659858  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/calico-145859/client.crt: no such file or directory
E0830 21:00:07.670227  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/calico-145859/client.crt: no such file or directory
E0830 21:00:07.690358  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/calico-145859/client.crt: no such file or directory
E0830 21:00:07.730880  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/calico-145859/client.crt: no such file or directory
E0830 21:00:07.811349  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/calico-145859/client.crt: no such file or directory
E0830 21:00:07.971909  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/calico-145859/client.crt: no such file or directory
E0830 21:00:08.292598  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/calico-145859/client.crt: no such file or directory
E0830 21:00:08.933141  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/calico-145859/client.crt: no such file or directory
E0830 21:00:10.213638  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/calico-145859/client.crt: no such file or directory
E0830 21:00:12.774423  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/calico-145859/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-991570 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.1: (5m43.107844656s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-991570 -n embed-certs-991570
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (343.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-385479 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [62487766-812f-4cc2-ac03-37aa7df9616a] Pending
helpers_test.go:344: "busybox" [62487766-812f-4cc2-ac03-37aa7df9616a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0830 21:00:17.895261  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/calico-145859/client.crt: no such file or directory
E0830 21:00:18.777256  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/kindnet-145859/client.crt: no such file or directory
E0830 21:00:21.303158  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/custom-flannel-145859/client.crt: no such file or directory
E0830 21:00:21.308497  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/custom-flannel-145859/client.crt: no such file or directory
E0830 21:00:21.319527  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/custom-flannel-145859/client.crt: no such file or directory
E0830 21:00:21.339944  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/custom-flannel-145859/client.crt: no such file or directory
E0830 21:00:21.380401  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/custom-flannel-145859/client.crt: no such file or directory
E0830 21:00:21.460796  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/custom-flannel-145859/client.crt: no such file or directory
E0830 21:00:21.621420  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/custom-flannel-145859/client.crt: no such file or directory
E0830 21:00:21.941876  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/custom-flannel-145859/client.crt: no such file or directory
helpers_test.go:344: "busybox" [62487766-812f-4cc2-ac03-37aa7df9616a] Running
E0830 21:00:22.582694  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/custom-flannel-145859/client.crt: no such file or directory
E0830 21:00:23.863859  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/custom-flannel-145859/client.crt: no such file or directory
E0830 21:00:26.424412  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/custom-flannel-145859/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.034132303s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-385479 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.46s)
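DeployApp creates a busybox pod from testdata/busybox.yaml and polls until a pod labeled integration-test=busybox is Running and Ready (the Pending to Running transitions are visible above), then execs into it as a liveness check. A sketch that leans on kubectl wait in place of the harness's own pod watcher (a simplification, not the test's actual mechanism), with the context name and 8m0s budget from the log:

	package main

	import (
		"log"
		"os/exec"
	)

	func run(args ...string) {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl %v: %v\n%s", args, err, out)
		}
		log.Printf("%s", out)
	}

	func main() {
		ctx := "default-k8s-diff-port-385479" // kubeconfig context = profile name
		run("--context", ctx, "create", "-f", "testdata/busybox.yaml")
		// Block until the pod reports Ready, mirroring the 8m0s budget above.
		run("--context", ctx, "wait", "--for=condition=Ready",
			"pod", "-l", "integration-test=busybox", "--timeout=8m0s")
		// Same final probe as the test: exec into the running container.
		run("--context", ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	}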

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-385479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0830 21:00:27.620150  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/auto-145859/client.crt: no such file or directory
E0830 21:00:28.136387  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/calico-145859/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-385479 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)
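EnableAddonWhileActive enables metrics-server on the live cluster while redirecting its image to the deliberately unreachable registry fake.domain, then only checks that the Deployment object was created (not that its pods run). The same pair of calls as a sketch, flags verbatim from the log:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		profile := "default-k8s-diff-port-385479"
		// Point the addon at a fake registry: the Deployment should still
		// be created even though its image can never be pulled.
		if out, err := exec.Command("out/minikube-linux-amd64", "addons", "enable",
			"metrics-server", "-p", profile,
			"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
			"--registries=MetricsServer=fake.domain").CombinedOutput(); err != nil {
			log.Fatalf("enable failed: %v\n%s", err, out)
		}
		// Existence check only: describe exits non-zero if the Deployment is missing.
		out, err := exec.Command("kubectl", "--context", profile, "describe",
			"deploy/metrics-server", "-n", "kube-system").CombinedOutput()
		if err != nil {
			log.Fatalf("describe failed: %v\n%s", err, out)
		}
		log.Printf("%s", out)
	}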

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-385479 --alsologtostderr -v=3
E0830 21:00:31.544962  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/custom-flannel-145859/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-385479 --alsologtostderr -v=3: (13.137201728s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.14s)
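The Stop step is a single timed invocation; the ~13s recorded above is dominated by the graceful shutdown of the KVM guest. A minimal timing sketch with the flags from the log:

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		out, err := exec.Command("out/minikube-linux-amd64", "stop",
			"-p", "default-k8s-diff-port-385479", "--alsologtostderr", "-v=3").CombinedOutput()
		if err != nil {
			log.Fatalf("stop failed: %v\n%s", err, out)
		}
		log.Printf("stopped in %s", time.Since(start))
	}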

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-385479 -n default-k8s-diff-port-385479
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-385479 -n default-k8s-diff-port-385479: exit status 7 (90.819379ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-385479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0830 21:00:41.785489  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/custom-flannel-145859/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (354.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-385479 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.1
E0830 21:00:43.393563  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/functional-037297/client.crt: no such file or directory
E0830 21:00:48.617368  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/calico-145859/client.crt: no such file or directory
E0830 21:01:02.266680  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/custom-flannel-145859/client.crt: no such file or directory
E0830 21:01:10.014369  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/false-145859/client.crt: no such file or directory
E0830 21:01:10.019552  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/false-145859/client.crt: no such file or directory
E0830 21:01:10.030161  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/false-145859/client.crt: no such file or directory
E0830 21:01:10.050488  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/false-145859/client.crt: no such file or directory
E0830 21:01:10.090729  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/false-145859/client.crt: no such file or directory
E0830 21:01:10.171284  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/false-145859/client.crt: no such file or directory
E0830 21:01:10.331675  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/false-145859/client.crt: no such file or directory
E0830 21:01:10.652757  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/false-145859/client.crt: no such file or directory
E0830 21:01:11.292943  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/false-145859/client.crt: no such file or directory
E0830 21:01:12.573274  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/false-145859/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-385479 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.1: (5m54.545746034s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-385479 -n default-k8s-diff-port-385479
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (354.81s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (25.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0830 21:01:20.255232  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/false-145859/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-2ht5v" [179835d8-23b4-4dee-861a-192c49bd2fa1] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0830 21:01:29.577576  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/calico-145859/client.crt: no such file or directory
E0830 21:01:30.496156  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/false-145859/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-2ht5v" [179835d8-23b4-4dee-861a-192c49bd2fa1] Running
E0830 21:01:37.695083  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/gvisor-450430/client.crt: no such file or directory
E0830 21:01:38.472984  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/enable-default-cni-145859/client.crt: no such file or directory
E0830 21:01:38.478260  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/enable-default-cni-145859/client.crt: no such file or directory
E0830 21:01:38.488498  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/enable-default-cni-145859/client.crt: no such file or directory
E0830 21:01:38.508753  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/enable-default-cni-145859/client.crt: no such file or directory
E0830 21:01:38.549109  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/enable-default-cni-145859/client.crt: no such file or directory
E0830 21:01:38.629858  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/enable-default-cni-145859/client.crt: no such file or directory
E0830 21:01:38.790460  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/enable-default-cni-145859/client.crt: no such file or directory
E0830 21:01:39.111215  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/enable-default-cni-145859/client.crt: no such file or directory
E0830 21:01:39.752422  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/enable-default-cni-145859/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 25.017244232s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (25.02s)
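UserAppExistsAfterStop asserts that the dashboard pod deployed before the restart comes back on its own after SecondStart: the harness watches pods labeled k8s-app=kubernetes-dashboard until one is Running and Ready, with a 9m0s budget (25s sufficed here). A client-go flavored sketch of that poll; it assumes the kubeconfig context is named after the profile, as minikube sets it up, and checks only the Running phase where the harness additionally checks readiness:

	package main

	import (
		"context"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the profile's kubeconfig context.
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
			clientcmd.NewDefaultClientConfigLoadingRules(),
			&clientcmd.ConfigOverrides{CurrentContext: "old-k8s-version-899372"},
		).ClientConfig()
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		deadline := time.Now().Add(9 * time.Minute) // same budget as the test
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						log.Printf("%s is Running", p.Name)
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("dashboard pod never became Running")
	}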

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-2ht5v" [179835d8-23b4-4dee-861a-192c49bd2fa1] Running
E0830 21:01:40.697631  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/kindnet-145859/client.crt: no such file or directory
E0830 21:01:41.033394  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/enable-default-cni-145859/client.crt: no such file or directory
E0830 21:01:43.227668  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/custom-flannel-145859/client.crt: no such file or directory
E0830 21:01:43.594640  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/enable-default-cni-145859/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01328996s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-899372 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)
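AddonExistsAfterStop repeats the same wait for the addon's own pod and then checks that the dashboard-metrics-scraper Deployment survived the restart. The equivalent one-off existence check, with names from the log:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// describe exits non-zero if the Deployment was lost across the restart.
		out, err := exec.Command("kubectl", "--context", "old-k8s-version-899372",
			"describe", "deploy/dashboard-metrics-scraper",
			"-n", "kubernetes-dashboard").CombinedOutput()
		if err != nil {
			log.Fatalf("addon deployment missing after restart: %v\n%s", err, out)
		}
		log.Printf("%s", out)
	}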

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-899372 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.42s)
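VerifyKubernetesImages lists the images inside the VM via minikube ssh and crictl, then flags anything outside the expected minikube image set; the busybox and gvisor-addon hits above are leftovers from earlier tests, which the step reports but tolerates. A sketch that parses the same JSON; the struct mirrors crictl's output shape, and the filter here is a simplified stand-in for the harness's real per-version expected list:

	package main

	import (
		"encoding/json"
		"log"
		"os/exec"
		"strings"
	)

	// Minimal view of the `crictl images -o json` output.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "ssh",
			"-p", "old-k8s-version-899372", "sudo crictl images -o json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			log.Fatal(err)
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				// Simplified filter; the harness compares against a
				// versioned list of expected Kubernetes images instead.
				if !strings.Contains(tag, "registry.k8s.io") &&
					!strings.Contains(tag, "kubernetesui") {
					log.Printf("Found non-minikube image: %s", tag)
				}
			}
		}
	}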

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-899372 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-899372 -n old-k8s-version-899372
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-899372 -n old-k8s-version-899372: exit status 2 (318.878047ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-899372 -n old-k8s-version-899372
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-899372 -n old-k8s-version-899372: exit status 2 (308.389297ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-899372 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-899372 -n old-k8s-version-899372
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-899372 -n old-k8s-version-899372
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.94s)
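The Pause step is a full round trip: pause the profile, confirm via two single-field status probes that the API server reports Paused while the kubelet reports Stopped (both probes exit with status 2, which the harness tolerates, hence the "(may be ok)" lines), then unpause and probe again. A condensed sketch:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// status runs a single-field probe; exit status 2 is expected while the
	// component is paused or stopped, so only other failures are fatal.
	func status(profile, field string) string {
		out, err := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{."+field+"}}", "-p", profile, "-n", profile).CombinedOutput()
		if ee, ok := err.(*exec.ExitError); err != nil && (!ok || ee.ExitCode() != 2) {
			log.Fatalf("status %s: %v", field, err)
		}
		return strings.TrimSpace(string(out))
	}

	func main() {
		profile := "old-k8s-version-899372"
		if err := exec.Command("out/minikube-linux-amd64", "pause",
			"-p", profile, "--alsologtostderr", "-v=1").Run(); err != nil {
			log.Fatal(err)
		}
		// Expected while paused: APIServer=Paused, Kubelet=Stopped.
		fmt.Println("APIServer:", status(profile, "APIServer"))
		fmt.Println("Kubelet:", status(profile, "Kubelet"))
		if err := exec.Command("out/minikube-linux-amd64", "unpause",
			"-p", profile, "--alsologtostderr", "-v=1").Run(); err != nil {
			log.Fatal(err)
		}
	}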

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (73.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-263591 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.1
E0830 21:01:50.977087  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/false-145859/client.crt: no such file or directory
E0830 21:01:58.956200  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/enable-default-cni-145859/client.crt: no such file or directory
E0830 21:02:14.308337  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/flannel-145859/client.crt: no such file or directory
E0830 21:02:14.313672  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/flannel-145859/client.crt: no such file or directory
E0830 21:02:14.323974  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/flannel-145859/client.crt: no such file or directory
E0830 21:02:14.344261  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/flannel-145859/client.crt: no such file or directory
E0830 21:02:14.384548  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/flannel-145859/client.crt: no such file or directory
E0830 21:02:14.464832  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/flannel-145859/client.crt: no such file or directory
E0830 21:02:14.625117  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/flannel-145859/client.crt: no such file or directory
E0830 21:02:14.945904  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/flannel-145859/client.crt: no such file or directory
E0830 21:02:15.586443  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/flannel-145859/client.crt: no such file or directory
E0830 21:02:16.867702  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/flannel-145859/client.crt: no such file or directory
E0830 21:02:19.428297  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/flannel-145859/client.crt: no such file or directory
E0830 21:02:19.436396  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/enable-default-cni-145859/client.crt: no such file or directory
E0830 21:02:24.549486  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/flannel-145859/client.crt: no such file or directory
E0830 21:02:31.937335  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/false-145859/client.crt: no such file or directory
E0830 21:02:32.637604  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/bridge-145859/client.crt: no such file or directory
E0830 21:02:32.642894  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/bridge-145859/client.crt: no such file or directory
E0830 21:02:32.653153  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/bridge-145859/client.crt: no such file or directory
E0830 21:02:32.673442  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/bridge-145859/client.crt: no such file or directory
E0830 21:02:32.713735  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/bridge-145859/client.crt: no such file or directory
E0830 21:02:32.794894  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/bridge-145859/client.crt: no such file or directory
E0830 21:02:32.955360  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/bridge-145859/client.crt: no such file or directory
E0830 21:02:33.275983  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/bridge-145859/client.crt: no such file or directory
E0830 21:02:33.916177  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/bridge-145859/client.crt: no such file or directory
E0830 21:02:34.790180  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/flannel-145859/client.crt: no such file or directory
E0830 21:02:35.196675  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/bridge-145859/client.crt: no such file or directory
E0830 21:02:37.757553  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/bridge-145859/client.crt: no such file or directory
E0830 21:02:42.878529  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/bridge-145859/client.crt: no such file or directory
E0830 21:02:43.774982  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/auto-145859/client.crt: no such file or directory
E0830 21:02:51.498790  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/calico-145859/client.crt: no such file or directory
E0830 21:02:53.119565  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/bridge-145859/client.crt: no such file or directory
E0830 21:02:55.270706  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/flannel-145859/client.crt: no such file or directory
E0830 21:03:00.396686  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/enable-default-cni-145859/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-263591 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.1: (1m13.157319689s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (73.16s)
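The newest-cni FirstStart exercises a CNI-mode bring-up: --network-plugin=cni with a pod-network CIDR handed to kubeadm via --extra-config, and --wait restricted to the control-plane components because no CNI plugin is actually installed, so workload pods cannot schedule (the "cni mode requires additional setup" warnings in the following steps come from the same limitation). The invocation as a commented argument list, flags verbatim from the log:

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		args := []string{
			"start", "-p", "newest-cni-263591",
			"--memory=2200", "--alsologtostderr",
			// No CNI is installed, so wait only for the control plane,
			// not for workload pods to become Ready.
			"--wait=apiserver,system_pods,default_sa",
			"--feature-gates", "ServerSideApply=true",
			"--network-plugin=cni",
			// Passed through to kubeadm as the pod network CIDR.
			"--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16",
			"--driver=kvm2", "--kubernetes-version=v1.28.1",
		}
		cmd := exec.Command("out/minikube-linux-amd64", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}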

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-263591 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-263591 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.019142831s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (8.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-263591 --alsologtostderr -v=3
E0830 21:03:05.148530  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/custom-flannel-145859/client.crt: no such file or directory
E0830 21:03:11.460901  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/auto-145859/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-263591 --alsologtostderr -v=3: (8.108593785s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-263591 -n newest-cni-263591
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-263591 -n newest-cni-263591: exit status 7 (67.433014ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-263591 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (47.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-263591 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.1
E0830 21:03:13.600322  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/bridge-145859/client.crt: no such file or directory
E0830 21:03:30.600476  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/kubenet-145859/client.crt: no such file or directory
E0830 21:03:30.605762  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/kubenet-145859/client.crt: no such file or directory
E0830 21:03:30.616011  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/kubenet-145859/client.crt: no such file or directory
E0830 21:03:30.636269  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/kubenet-145859/client.crt: no such file or directory
E0830 21:03:30.676557  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/kubenet-145859/client.crt: no such file or directory
E0830 21:03:30.756661  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/kubenet-145859/client.crt: no such file or directory
E0830 21:03:30.917111  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/kubenet-145859/client.crt: no such file or directory
E0830 21:03:31.237712  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/kubenet-145859/client.crt: no such file or directory
E0830 21:03:31.878873  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/kubenet-145859/client.crt: no such file or directory
E0830 21:03:33.159345  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/kubenet-145859/client.crt: no such file or directory
E0830 21:03:35.720600  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/kubenet-145859/client.crt: no such file or directory
E0830 21:03:36.231369  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/flannel-145859/client.crt: no such file or directory
E0830 21:03:40.841815  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/kubenet-145859/client.crt: no such file or directory
E0830 21:03:51.082320  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/kubenet-145859/client.crt: no such file or directory
E0830 21:03:53.858277  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/false-145859/client.crt: no such file or directory
E0830 21:03:54.036899  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/skaffold-318104/client.crt: no such file or directory
E0830 21:03:54.560491  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/bridge-145859/client.crt: no such file or directory
E0830 21:03:56.850813  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/kindnet-145859/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-263591 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.1: (46.721037165s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-263591 -n newest-cni-263591
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (47.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-263591 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-263591 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-263591 -n newest-cni-263591
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-263591 -n newest-cni-263591: exit status 2 (260.71485ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-263591 -n newest-cni-263591
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-263591 -n newest-cni-263591: exit status 2 (262.519373ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-263591 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-263591 -n newest-cni-263591
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-263591 -n newest-cni-263591
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nvk5b" [518a571d-2e13-4f18-9999-933ca04d2485] Running
E0830 21:05:07.655047  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/calico-145859/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.019822591s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nvk5b" [518a571d-2e13-4f18-9999-933ca04d2485] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012608634s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-706840 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-706840 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.46s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-706840 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-706840 -n no-preload-706840
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-706840 -n no-preload-706840: exit status 2 (250.603164ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-706840 -n no-preload-706840
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-706840 -n no-preload-706840: exit status 2 (248.098637ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-706840 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-706840 -n no-preload-706840
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-706840 -n no-preload-706840
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.46s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-v69w2" [21c5bc57-3c9d-4a2b-a461-d7e91616e8f7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.021624301s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-v69w2" [21c5bc57-3c9d-4a2b-a461-d7e91616e8f7] Running
E0830 21:05:58.028266  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/old-k8s-version-899372/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013960863s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-991570 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-991570 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-991570 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-991570 -n embed-certs-991570
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-991570 -n embed-certs-991570: exit status 2 (237.989753ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-991570 -n embed-certs-991570
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-991570 -n embed-certs-991570: exit status 2 (253.387894ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-991570 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-991570 -n embed-certs-991570
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-991570 -n embed-certs-991570
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-p5rls" [ec84b307-1608-4a9b-8172-2a6728a3ecc5] Running
E0830 21:06:37.694434  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/gvisor-450430/client.crt: no such file or directory
E0830 21:06:37.698595  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/false-145859/client.crt: no such file or directory
E0830 21:06:38.472910  229347 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/enable-default-cni-145859/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.021135436s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-p5rls" [ec84b307-1608-4a9b-8172-2a6728a3ecc5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011360539s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-385479 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-385479 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-385479 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-385479 -n default-k8s-diff-port-385479
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-385479 -n default-k8s-diff-port-385479: exit status 2 (255.570461ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-385479 -n default-k8s-diff-port-385479
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-385479 -n default-k8s-diff-port-385479: exit status 2 (243.666337ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-385479 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-385479 -n default-k8s-diff-port-385479
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-385479 -n default-k8s-diff-port-385479
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.39s)

Test skip (31/317)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)

TestDownloadOnly/v1.28.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)

TestDownloadOnly/v1.28.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.1/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (3.27s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-145859 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-145859

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-145859

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-145859

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-145859

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-145859

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-145859

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-145859

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-145859

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-145859

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-145859

>>> host: /etc/nsswitch.conf:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: /etc/hosts:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: /etc/resolv.conf:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-145859

>>> host: crictl pods:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: crictl containers:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> k8s: describe netcat deployment:
error: context "cilium-145859" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-145859" does not exist

>>> k8s: netcat logs:
error: context "cilium-145859" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-145859" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-145859" does not exist

>>> k8s: coredns logs:
error: context "cilium-145859" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-145859" does not exist

>>> k8s: api server logs:
error: context "cilium-145859" does not exist

>>> host: /etc/cni:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: ip a s:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: ip r s:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: iptables-save:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: iptables table nat:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-145859

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-145859

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-145859" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-145859" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-145859

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-145859

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-145859" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-145859" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-145859" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-145859" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-145859" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: kubelet daemon config:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> k8s: kubelet logs:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-145859

>>> host: docker daemon status:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: docker daemon config:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: docker system info:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: cri-docker daemon status:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: cri-docker daemon config:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: cri-dockerd version:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: containerd daemon status:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: containerd daemon config:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: containerd config dump:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: crio daemon status:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: crio daemon config:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: /etc/crio:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

>>> host: crio config:
* Profile "cilium-145859" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-145859"

----------------------- debugLogs end: cilium-145859 [took: 3.13711702s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-145859" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-145859
--- SKIP: TestNetworkPlugins/group/cilium (3.27s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-783784" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-783784
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)